Book Club: The Edge of Reason 8, scientific morality

Here we come to the eighth installment of my running discussion about Julian Baggini’s excellent book on the nature of rationality, The Edge of Reason: A Rational Skeptic in an Irrational World. In this chapter Julian takes on those people — like Sam Harris — who want to reduce moral philosophy to neuroscience or some similarly misguided enterprise. I must admit, there is such a convergence of thinking between Julian and myself on this that reading the chapter was like indulging in philosophical porn…

Right off the bat, Baggini summarizes what is wrong with the scientistic approach (did I mention I have a book on this topic coming out soon?): “Champions of the rational are often their own worst enemies, especially when they happen also to be scientists. … [they push] an excessively narrow understanding of what reason involves, which is essentially evidence-based empiricism, no more and no less. … [this is an] iniquitous intellectual land grab, in which all meaningful discourse is claimed for science and anything else is razed to the ground as useless.” I could stop here, really. But let’s continue. As I said, it was an Epicurean dip for me.

Julian quickly moves on to his favorite example of such malfeasance: Sam Harris’ The Moral Landscape (which I have reviewed, very unfavorably, for Skeptic magazine, even though Michael Shermer censored the final bit of my review, in which I suggested that anyone wanting to learn something about moral philosophy would do better to read Michael Sandel than Sam Harris).

Baggini explains that he is picking on Harris because “the chief value of The Moral Landscape is that it is one of the clearest articulations of the scientistic approach to ethics, which is often less brazenly expressed.” (For another brazen expression, see this discussion I had with the above-mentioned Shermer.) Baggini actually interviewed Harris at his home in California, and based his commentary on that interview.

Harris told Julian that “We know that morality has something to do with human well-being and we know that human well-being must be arising from the physiology of the brain and therefore is constrained by whatever psychophysical laws are in fact true of the brain, and therefore we know it falls potentially within the framework of science.”

As Julian immediately points out, a lot hinges on exactly what one means by “something to do,” and “constrained.” Yes, of course morality has to do with human well-being (actually, more broadly, with the well-being of sentient creatures), and it is constrained by human biology and culture — no philosopher would argue otherwise. But that falls far short of what’s needed to establish a science of morality. Sure enough, Baggini immediately acknowledges that empirical evidence, and therefore science, is informative on a number of ethical issues. For instance, the question “how should I raise my children?” does require input from child developmental psychology, among other fields. But there are a number of ways to raise one’s children given the same understanding of developmental psychology. That is, the science — as always — underdetermines the philosophical options. That’s why values are not straightforwardly reducible to empirical facts, which in turn means that one cannot collapse moral philosophy into science.

Julian again: “It simply does not follow from the fact that some things are objectively bad [for human beings] from a scientific point of view that science can determine all that is right or wrong. Take, for example, the old dispute between Mill and Bentham as to whether the pleasure of playing a simple game like pushpin has as much value as the pleasure derived from playing Chopin. Science cannot resolve this dispute.” And before you suggest it, no, it would be ridiculous to try to settle the matter by measuring the intensity of the activity of the pleasure centers of the brain: if you go that way (which actually Harris does, in his book!) you will have to conclude that the most moral thing to do is to hook everyone up to a drug-delivering machine for their entire lives. I hope I don’t have to explain to you why this isn’t the moral thing to do.

Baggini notes that Harris concedes that nobody has yet proposed a way to read morality straight off, say, neuroscans. But Harris then engages in a significant amount of hand waving to argue that not having an answer yet doesn’t mean there is no answer in principle (while at the same time not even giving a hint of what this “in principle” route would look like). Julian’s retort is that “well-being” is not a biologically meaningful category (as a biologist, I wholeheartedly agree), and that there are plenty of instances in which people choose pain and suffering because they think it is the moral thing to do: “The idea that brain scans could reveal to us what form of life is morally better is absurd because brain scans are value-neutral.”

Harris, in the course of the interview, says: “What does it mean to say it’s really true that something is wrong? If you push there, you either have to come down to some truth that falls within the purview of science — that there’s something about our world, human nature or the prospects of human happiness that admits of truth claims — or you’re just left with preferences: wrong just because we don’t like it or a majority of people don’t like it.”

But Julian immediately objects that this is a false dichotomy, that moreover misunderstands the nature of both reason and ethics: “Outlooks, values and beliefs can be more or less reasonable, more or less objective.” (See this old post of mine suggesting as much.)

Moreover, Harris did not invent anything new. Just consider this bit from John Stuart Mill, back in 1872: “The backward state of the Moral Sciences can only be remedied by applying to them the methods of Physical Science, duly extended and generalized.” Mill’s project, however, immediately failed because of his introduction of the distinction between “high” and “low” pleasures, a qualitative dichotomy that simply cannot be backed up by any “physical science,” and yet is the only thing that saves post-Bentham utilitarianism from descending into a search for the lowest common denominator that makes everyone “happy” (which would be the above-mentioned drug hook-up).

After taking care of Harris, Julian then moves on to the opposite mistake, in a sense, made this time by scientistic philosophers like Alex Rosenberg, author of the Atheist Guide to Reality: Enjoying Life Without Illusions. (I reviewed that one too, again, not positively.)

The connection between Harris and Rosenberg is explained very clearly by Baggini: “Harris is not necessarily representative of mainstream scientific thinking about morality. It is telling, however, that the more common alternative view is equally simplistic and extreme. This is the view that science debunks ethics. Science does not determine human values, it reveals them to be a kind of fiction.”

Here is an example of Rosenberg’s approach: “(i) What is the difference between right and wrong, good and bad? There is no moral difference between them. (ii) Why should I be moral? Because it makes you feel better than being immoral. (iii) Is abortion, euthanasia, suicide, paying taxes, foreign aid or anything else you don’t like forbidden, permissible, or sometimes obligatory? Anything goes.”

Julian finds it hard to believe that people like Rosenberg are serious about this, rather than just playing a (sick, I might add) intellectual game. Here is why: “it is interesting that [Rosenberg] does not add to his list child sexual abuse, rape, torture of the innocents and so on. To say ‘anything goes’ after a list like that would be extremely hard to take seriously.” Indeed.

The problem, concludes Baggini in this section of the chapter, is this: “The mistake is to believe that the methods of science have a monopoly on the practice of reason. From this it follows that morality must either be taken under the wing of science or cast out as irrational.” This mistake, of course, runs contrary to Baggini’s own careful analysis of what reason is, which we have explored in detail in the past several posts.

None of the above, however, means that science is irrelevant to moral questions. One of the most obvious examples is that of abortion — interestingly, one of those I also bring forth in the context of these discussions, and pretty much along the lines sketched by Julian in this chapter.

Let’s say we arrive at a position that says that abortion is permissible up until the moment in which the fetus begins to feel pain, and after that only if the life of the mother is in danger. (This is for the sake of discussion, not necessarily my or Baggini’s position, so don’t get worked up about it.) Well, then it is up to science — and in particular neuroscience and developmental biology — to give us the best estimate of when that is actually the case. But arriving at that specific criterion, rather than other possible ones, is a matter of philosophical dialogue, not (just) empirical evidence.

Julian also says, again, very similarly to what I’ve been writing for a while now, that another scientific input into the question of morality comes in the area of understanding the origin of the human moral sense. Here it is comparative anthropology, evolutionary biology, and primatology that play the crucial role.

Then there is the contribution of neuroscience to our understanding of how the brain arrives at moral decisions. Interesting, scientifically, but again not at all the same thing as a science of morality. Why? Because “people all over the world have the same basic brain circuitry and yet moral norms differ enormously.”

As an example, Julian compares how the Inuits and the Polynesians treat deception on the part of a group member: it is a capital offense in the first case, but only gets you a slap on the wrist in the second case. Why? Because the living conditions of Inuits are such that deception can cost the lives of several group members, or even the survival of the entire group. Not so under the more benign environmental conditions enjoyed by the Polynesians. The brains are the same, and so is their deep evolutionary history. But the cultural conditions are dramatically divergent, because of their very different environments.

Evolutionary psychology too doesn’t really help settle moral questions. For one, because the fact that something is natural (rape, for instance, according to evopsychs Randy Thornhill and Craig Palmer) obviously doesn’t make it right (that would be an appeal to nature, an informal fallacy); and second because “to conclude that evolutionary psychology debunks ethics by showing that it is ‘nothing more than’ reciprocal altruism or enlightened self-interest” is an example of “the genetic fallacy: confusing an account of something’s origins with its justification.”

By the end of the chapter Julian arrives at the very same conclusion I have been defending for years, as astonishing as it is that it actually needs defense: “A scientifically informed ethics is to be welcomed, but a purely scientific ethics is an impossibility.”

159 thoughts on “Book Club: The Edge of Reason 8, scientific morality”

  1. Disagreeable Me (@Disagreeable_I)

    Hi Massimo,

    I don’t agree with Harris entirely, but I think there are a few points to be made in his defence.

    But there are a number of ways to raise one’s children given the same understanding of developmental psychology. That is, the science — as always — underdetermines the philosophical options

    That’s the whole reason for Harris to talk of a Moral Landscape, a landscape with many different peaks (scenarios conducive to well-being) and valleys (scenarios deleterious to well-being). His view is that science can tell us where these peaks and valleys lie, not which (roughly equivalent) peaks we ought to prefer. His proposed science of morality is about uncovering all the empirical facts about which choices lead to “good” outcomes, not about choosing between outcomes which are approximately as good as each other. So the fact that science underdetermines what choice we ought to make is no real criticism of Harris. It is enough for him that science constrains what choice we ought to make. At least we can avoid the valleys and find some peak or other.

    That said, Harris might still argue for choosing one peak over another based upon its “height”, or how well it supports well-being, and like you I doubt there could be any objective metric of such a thing. Different people prefer different flavours of well-being and so there is no one overall objective picture of the landscape that science can provide for us. That’s the problem with Harris, not underdetermination.

    it would be ridiculous to try to settle the matter by measuring the intensity of the activity of the pleasure centers of the brain: if you go that way (which actually Harris does, in his book!)

    I actually don’t think he does, although I can see why you might have interpreted him that way. There’s a reason he describes what he wants in terms of well-being rather than pleasure, and it is to avoid this kind of misunderstanding.

    I think he would say that if we prefer living fulfilling lives to being blissed out on drugs, then there must be neurological facts to correspond with this preference — something different about a brain that is being satisfied with a life well lived as compared to a brain that is being stimulated to ecstasy. He would identify well-being with the former and not the latter. Rather than basing his morality on measuring the activation of the pleasure centres, he would base it on the neurological facts that correspond to well-being or eudaimonia.

    However, that being said, I can again turn that around and offer what I think is a better criticism of Harris on similar grounds — that it’s hard to see how, if there are neurological facts corresponding to eudaimonia as opposed to mere pleasure (and I think there must be), these facts could not also be instantiated artificially, and yet many of us would prefer not to seek eudaimonia thus.

    there are plenty of instances in which people choose pain and suffering because they think it is the moral thing to do:

    I don’t think this criticism works. Harris would say that people who choose pain and suffering because it will somehow promote well-being (e.g. by helping to oppose tyranny, or to exercise and so promote their own fitness and wellness) are doing the right thing. People who choose pain and suffering which will not promote well-being (e.g. people who feel a religious duty to torture themselves in the belief that it will bring rewards in the afterlife) are simply mistaken.

    Moreover, Harris did not invent anything new.

    So what? He’s the leading contemporary proponent and populariser of this view. Who cares if he originated it? I’m not aware of him making any such claims.

    Anyway, I agree with your conclusion. Harris’s project ultimately fails. But I think he is often misinterpreted and taken to be making stronger claims than he is. He’s wrong, but not as trivially and self-evidently wrong as you take him to be. He deserves more credit than you are willing to grant him.

  2. SocraticGadfly

    On Harris, Baggini could also mention that “well-being” isn’t necessarily the same as ethics.

    On Thornhill/Palmer, I’ve mentioned before that they’ve stacked the deck in several ways. Namely, in our tribalist EEA past, a rapist would quickly be known — and killed. No evolutionary advantage at all.

  3. Daniel Kaufman

    Well said, Massimo. I don’t find the Harrises of the world worth devoting much energy to. If I want to engage with scientistic treatments of value, I’ll turn to the much more serious, much smarter B.F. Skinner and his “Beyond Freedom and Dignity,” which probably is the best of all such efforts. Like a fly, Harris is easily swatted away, simply by asking him “Why ought I to promote human well-being?” which is a perfectly well-formed, evaluative question. While he’s struggling with that one, I can escape into the crowd and seek out more interesting people to talk to.

  4. Daniel Kaufman

    DM wrote:

    His view is that science can tell us where these peaks and valleys lie, not which (roughly equivalent) peaks we ought to prefer. His proposed science of morality is about uncovering all the empirical facts about which choices lead to “good” outcomes, not about choosing between outcomes which are approximately as good as each other.

    = = =

    Then he is quite confused about the subject of moral philosophy, part of which is to determine what ‘good’ means and what counts as good. To fail to do this — to simply describe what causes outcomes we already agree are good — is not to do moral philosophy, axiology, or anything else philosophically interesting.

  5. Bunsen Burner

    Socratic:

    ‘Namely, in our tribalist EEA past, a rapist would quickly be known — and killed’

    I suspect you probably mean only if that rape was against a woman of the same tribe as the rapist, and moreover one belonging to a high status male.

  6. Disagreeable Me (@Disagreeable_I)

    Hi Dan,

    Harris is easily dispensed with by simply asking him “Why ought I to promote human well-being?” which is a perfectly well-formed, evaluative question.

    He would not agree that this question flummoxes him. He would say this is simply what “ought” means, and anyone who would not agree that it is good to promote well-being and bad to promote misery is either disingenuous or a psychopath, or otherwise not to be listened to. His position in this regard is not so different from Baggini’s sentimentalism.

    He wouldn’t necessarily agree with this characterisation, but to me he seems to be more or less advocating a certain convention — that we take words such as “ought” and “moral” and “good” and “bad” to be about well-being, because what else could they be about? At least in Harris’s view, if not quite mine and probably not yours, well-being is the only sensible answer.

    Then he is quite confused about the subject of moral philosophy, part of which is to determine what ‘good’ means and what counts as good.

    I wouldn’t say he is confused, because he doesn’t really present himself as engaging in the practice of academic moral philosophy. He’s a polemicist and a public intellectual, but not a traditional moral philosopher.

    His agenda has much more to do with addressing the moral relativism or nihilism of a Rosenberg than answering what “good” means and what counts as good. He’s reacting against the rejection of moral questions as meaningless we see from the likes of Coel. He’s trying to reconcile a science-oriented world view with morality, not entirely successfully.

    He doesn’t seriously engage with academic philosophy, with the deontology of Kant or the virtue ethics of Aristotle, because moral philosophers are not the community he is trying to reach. He’s trying to reach the community of scientistic thinkers like himself.

    His strategy is to declare by fiat what good means and then to say that as long as we accept his definition (and he thinks few reasonable people will seriously object to regarding well-being as good and misery as bad), then it’s not the case that anything goes, or that there are no facts of the matter about what is good or what is bad. There are right and wrong answers to moral questions (by which he means questions pertaining to well-being) and science can find them (he thinks).

  7. Daniel Kaufman

    DM wrote:

    He would say this is simply what “ought” means, and anyone who would not agree that it is good to promote well-being and bad to promote misery is either disingenuous or a psychopath, or otherwise not to be listened to.

    = = =

    Well, if he is simply going to do axiology by way of stipulative definition, then the question certainly has flummoxed him.

    = = =

    DM wrote;

    His strategy is to declare by fiat what good means and then to say that as long as we accept his definition (and he thinks few reasonable people will seriously object to regarding well-being as good and misery as bad), then it’s not the case that anything goes, or that there are no facts of the matter about what is good or what is bad. There are right and wrong answers to moral questions (by which he means questions pertaining to well-being) and science can find them (he thinks).

    = = =

    A few things. First, the marketing of the book is rather disingenuous if this is the case. Second, Harris really ought to stop going on panels with real moral philosophers and theologians and having debates about questions of value.

  8. synred

    I’ve mentioned before that they’ve stacked the deck in several ways. Namely, in our tribalist EEA past, a rapist would quickly be known — and killed. No evolutionary advantage at all.

    We have tribalist societies today. These tend to kill the woman, not the man, which leads women not to report the rape and to try to cover it up.

    This could be formulated as a ‘just so’ story to explain why rape is underreported…

    …which says more about ‘evolutionary psychology’ than it does about morality, or good and evil.

  9. Massimo Post author

    Bunsen,

    “I suspect you probably mean only if that rape was against a woman of the same tribe as the rapist, and moreover one belonging to a high status male.”

    It’s hard to speculate, but I gather the best understanding is that in early human history people lived in small bands of mostly related individuals. So it is hard to imagine that perpetrators of rape would not have been swiftly identified and punished.

    DM,

    I’m going to try to address some of your points, but I need to preface this by saying that reading this chapter was, for me, a strong validation of my own take on Harris. If two thoughtful philosophers who are familiar with both the scientific literature and the moral philosophical one (i.e., Julian and yours truly) have arrived at exactly the same take on Harris, down to using — independently of each other — similar counterexamples, I think this further suggests that Harris supporters are simply not clear on what their paladin is up to.

    “His view is that science can tell us where these peaks and valleys lie”

    No. Science can tell us what causes pain or pleasure. The “peaks” are defined by people’s values, not by straightforward empirical evidence.

    There are peaks in the moral landscape, but as I explained at length in my book on the nature of philosophy (http://tinyurl.com/mwb3zho) they are conceptual (different types of virtue ethics, Kantian deontology, different kinds of utilitarianism), not empirical.

    “So the fact that science underdetermines what choice we ought to make is no real criticism of Harris”

    Underdetermine here means something different: it doesn’t underdetermine the peaks, it underdetermines the choice of values.

    “Harris might still argue for choosing one peak over another based upon its “height””

    He does argue that, but since I don’t believe the peaks are empirically determined, this becomes irrelevant.

    “There’s a reason he describes what he wants in terms of well-being rather than pleasure, and it is to avoid this kind of misunderstanding.”

    He is actually ambiguous about this in the book. At one point he does say that we will eventually look at neuroscans and “discover” what stimulates people’s pleasures. Hence the drug-machine objection.

    “I think he would say that if we prefer living fulfilling lives to being blissed out on drugs, then there must be neurological facts to correspond with this preference”

    Nobody doubts that. Everything we do/think has a neural correlate.

    “he would base it on the neurological facts that correspond to well-being or eudaimonia.”

    Which, as Baggini says, are not meaningful scientific categories.

    “Harris would say that people who choose pain and suffering because it will somehow promote well-being (e.g. by helping to oppose tyranny, or to exercise and so promote their own fitness and wellness) are doing the right thing.”

    Yes, but on what empirical/scientific bases would he say that? And no, people don’t oppose tyranny just to increase their own fitness. That’s an incredibly reductive view of human values.

    “Who cares if he originated it? I’m not aware of him making any such claims.”

    I care about intellectual honesty. He may not make that claim directly, but the fact that he doesn’t engage with the philosophical literature and ignores Mill is either sloppy or dishonest. Or both.

    “He deserves more credit than you are willing to grant him.”

    I guess we are going to have to disagree on this one. I think of him as a self-serving intellectual lightweight, who unfortunately has had a lot of impact. Which says something (not good) about our contemporary society.

  10. Robin Herbert

    Hi DM,

    His strategy is to declare by fiat what good means and then saying that as long as we accept his definition (and he thinks few reasonable people will seriously object to regarding well-being as good and misery as bad), then it’s not the case that anything goes, or there are no facts of the matter about what is good or what is bad.

    That is the problem with his position. It makes “You ought not to hurt people” mean “Hurting people hurts people.”

    An act which visits misery on everyone including the perpetrator is not so much bad as stupid. Most bad actions manage to promote well-being for some – in fact this is the entire point of most actions which are normally considered bad.

    So if “good” is just promoting well being and “bad” is promoting misery then a group which improves its own well being by increasing the misery of another group is doing something which is partly good, partly bad.

    I don’t think that jells with the way most people use those words.

  11. ejwinner

    DM,
    “His proposed science of morality is about uncovering all the empirical facts about which choices lead to “good” outcomes”
    Ethics is not about outcomes. It is about commitment and obligation. Once committed, one is obliged to certain actions, often regardless of outcome. That’s why the question of value — what we are committed to — becomes crucial, and not reducible to either neurological functions or simple calculi of desires and fears.

  12. Coel

    Quoting Baggini:

    Champions of the rational are often their own worst enemies, especially when they happen also to be scientists…. [they push] an excessively narrow understanding of what reason involves, which is essentially evidence-based empiricism, no more and no less.

    This is what critics of scientism say that such people do, but it isn’t true. Scientism is not about narrowing down reason to just “evidence-based empiricism”, it is about recognising that science itself is much broader than that. It’s not a narrow conception of reason, it’s a broad conception of science.

    That’s why values are not straightforwardly reducible to empirical facts, which in turn means that one cannot collapse moral philosophy into science.

    Correct; of course you can’t get values from science. But nor can you get values from philosophy. You can only get values from people. (In my earlier days arguing on the internet with religious people I would regularly state: “People don’t get their morals from religion; religions get their morals from people”; I think I need a new version of this substituting the word “philosophy”.)

    Julian finds it hard to believe that people like Rosenberg are serious about this …

    Useful tip: if you find it hard to believe that someone is actually serious, consider that you are likely misunderstanding them. In this case Baggini misunderstands Rosenberg. The sentence: “Is abortion, euthanasia, suicide, paying taxes, foreign aid or anything else you don’t like forbidden, permissible, or sometimes obligatory? Anything goes.” is addressed to the moral realist, and is saying that there is no objective reason (no non-human reason, no binding force from beyond humanity) that obligates or forbids any of those.

    But, Rosenberg is saying, humans are moral animals, and from those human feelings we negotiate with each other and work out how we want to live in society. This is clear in the wider context of the book. To quote Rosenberg: “Scientism is nihilistic, but we are not”. That distinction is crucial. He is not in any sense recommending or accepting a society in which “child sexual abuse, rape, torture of the innocents” are permitted.

    Harris, in the course of the interview, says: “What does it mean to say it’s really true that something is wrong? If you push there, you either have to come down to some truth that falls within the purview of science […] or you’re just left with preferences: wrong just because we don’t like it or a majority of people don’t like it.”

    For all the bashing that Harris gets from philosophers, he is actually spot on there. (Where Harris does go wrong is in rejecting the latter option, and so then trying hard, and failing, to make the former option work — which, by the way, is not typical of scientists or scientismists, mostly they go for the latter option.)

    Baggini is more confused than Harris here. He’s spent the last two chapters getting himself confused about the concept of objectivity, and now says: “Outlooks, values and beliefs can be more or less reasonable, more or less objective”. No they can’t, values can only ever be subjective. And there is nothing wrong with that; in no sense does it make them invalid or unimportant!

    Hi DM,

    He’s reacting against the rejection of moral questions as meaningless we see from the likes of Coel.

    But I don’t; I take talk about morals as referring to human feelings and values. There is nothing “meaningless” about that. It’s the only actual meaning that moral language has.

  13. Coel

    Hi Dan,

    Then he is quite confused about the subject of moral philosophy, part of which is to determine what ‘good’ means and what counts as good.

    So what does morally “good” mean?

  14. Disagreeable Me (@Disagreeable_I)

    Hi Massimo,

    I think this further suggests that Harris supporters are simply not clear on what their paladin is up to.

    You and Baggini perhaps think similarly, and perhaps have a somewhat similar background, being academic philosophers, so perhaps it’s not all that surprising that you have a similar take on Harris. I wouldn’t be so sure that the similarity of your views is much of a validation that you are interpreting him correctly and his followers are not.

    There are peaks in the moral landscape, but as I explained at length in my book on the nature of philosophy (http://tinyurl.com/mwb3zho) they are conceptual (different types of virtue ethics, Kantian deontology, different kinds of utilitarianism), not empirical.

    You’re comparing apples and oranges. Your landscape is that of the potential conceptual positions. Harris’s landscape is the landscape of different ways we could set up society (corresponding to coordinates such as latitude and longitude) and the ways these would affect human well-being (corresponding to altitude).

    Harris’s landscape has peaks and valleys, and so is closely analogous to a real landscape. Your landscape doesn’t really have peaks. It has different flat regions corresponding to different philosophical positions. They’re completely different things.

    Underdetermine here means something different: it doesn’t underdetermine the peaks, it underdetermines the choice of values.

    Hmm, you present a dichotomy here which is a different one than the one I thought we were discussing, so this is putting me off track a little bit.

    Put it this way. Consider 3 distinct questions

    Whether science can determine our values
    Whether science can determine the peaks in the moral landscape (i.e. facts about how certain policies or choices will impact on well-being)
    Whether science can determine what we ought to do.

    So, initially I took you to be complaining about underdetermination in question 3, but now you seem to be complaining about underdetermination on question 1.

    On question 1, Harris takes it for granted that we value well-being. He doesn’t claim that science can determine this. This is just his starting point. He finds any other starting point on morality deeply implausible and/or disingenuous. This is basically Baggini’s point on sentimentalism except expressed in terms of valuing the well-being of others rather than finding the fact that others have interests to be a compelling reason to help others advance their interests.

    On question 2, this is Harris’s whole point, and you seem to agree with him (while finding this to be a trivial and uninteresting point).

    On question 3, Harris does not claim that science can determine what we ought to do in cases where there exists more than one choice which would promote well-being as much as possible. This corresponds to a case where there are two equally high peaks we can reach. In these cases, he accepts that the answer is underdetermined by science. There is no one right answer; there are two or more, any of which are acceptable. This is the issue I took you to be referring to when you mentioned that we could have all the facts about psychology and yet make different choices about how to raise our kids.

    He is actually ambiguous about this in the book. At one point he does say that we will eventually look at neuroscans and “discover” what stimulates people’s pleasures.

    If he is ambiguous then he could be read in two ways, and perhaps it’s better to give him the benefit of the doubt. I actually thought he was pretty clear that he was not talking about naive pleasure but more about eudaimonia. He might mention pleasure centres but I think only insofar as pleasure plays some role in eudaimonia. He would hope to find other neural correlates of eudaimonia to build a fuller picture. In any case I think it’s clear from the text that Harris would not regard stimulation of the pleasure centres as a peak in the moral landscape, since empty pleasure does not seem to correspond to how he thinks of well-being.

    And no, people don’t oppose tyranny just to increase their own fitness. That’s an incredibly reductive view of human values.

    I didn’t say this. You misread me. I gave two distinct examples.

    Accepting pain to oppose tyranny
    Accepting pain to increase one’s own fitness

    but the fact that he doesn’t engage with the philosophical literature and ignores Mill is either sloppy or dishonest.

    Fair point, I guess. As long as he is aware of Mill, and I think he is. But he’s more speaking his own mind to his audience I think than seeking to engage with the literature. Mill has priority but I suspect Harris would be putting forth the same view even without Mill. Even so, omitting Mill clearly wouldn’t be acceptable in an academic paper. As a popular book for a scientistic new-atheist audience? Perhaps still not acceptable, but not everyone cares about literature review and citation in such contexts. I personally find it hard to get too exercised about it.

    Like

  15. Disagreeable Me (@Disagreeable_I)

    Hi Robin,

    Harris cares about net well-being. An act which increases well-being for some and decreases it for others is for Harris either moral, immoral or morally neutral, depending only on the net effect.

    This might not gel particularly well with the moral intuitions of everybody, but it’s usually not all that far out.

    A tricky case for Harris is where I make myself twice as happy with an act which hurts you to some lesser extent. Perhaps I steal $1,000,000 from Jeff Bezos, who is rich enough not to miss it. This act helps me a lot (assuming that getting a million dollars is good for my well-being, which I know is debatable). A net improvement in well-being, perhaps, but not what most people would regard as particularly moral. All the same, plenty of thieves and scam artists somehow feel this kind of reasoning justifies their actions, at least to themselves, so it seems to have some intuitive hold on some people.
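    Since the “net effect” criterion described above is purely arithmetic, the worry can be made concrete with a toy sketch (the scoring scheme and all numbers are hypothetical, purely for illustration):

    ```python
    # A strict "net effect" rule ranks an act only by the sum of the
    # well-being changes it causes, regardless of how they are distributed.

    def net_well_being(changes):
        """Sum the (hypothetical) well-being deltas of everyone affected."""
        return sum(changes.values())

    # The thief gains more than the billionaire loses (made-up numbers).
    theft = {"thief": +2.0, "billionaire": -0.5}

    print(net_well_being(theft))  # 1.5: a net gain, so a pure net-effect
    # view would count the theft as moral, against most people's intuitions.
    ```

    The point of the sketch is only that a sum throws away all distributional information: who gains and who loses vanishes into a single number.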

    Like

  16. wtc48

    “and he thinks few reasonable people will seriously object to regarding well-being as good and misery as bad”: I love the idea that somewhere it may have been debated that the death penalty should be imposed on people for being miserable.

    Like

  17. Alessio Persichetti

    In this case I must agree with Baggini, since I hold the same meta-ethical view. At least, when we keep the distinction in morality between “descriptive” and “prescriptive”: science can give us a description of what we do when we make a moral decision, and of the underlying processes (i.e. brain activity or historical contingencies); but the prescription of what we must or should do is not an empirical issue, but a conceptual one. It belongs to what Wilfrid Sellars, Richard Rorty and Robert Brandom have called the “space of reasons”, whose objectivity is given by confrontation in rational discourse: hence — following Kant — normativity can’t be deduced from a state of affairs in the world, including both commonly accepted actions and the structure of our central nervous system.

    Liked by 2 people

  18. SocraticGadfly

    Bunsen, with the exception that he didn’t actually call for use of a machine for delivery, that’s “Brave New World.”

    Surprisingly, businesses haven’t followed through with something like dosing company water coolers with low-grade antidepressants. (THC wouldn’t work legally in America, though it would surely WORK even better.) Or maybe companies ARE putting Prozac in the water, or aerosolizing it in the HVAC, and we just don’t know.

    Liked by 1 person

  19. Daniel Kaufman

    Coel wrote:

    “So what does ‘good’ mean?”

    = = =

    Welcome to the field of axiology. I spent almost 8 years studying it as well as other subjects. There is a ton of literature with many different answers. Highly worth exploring. And not avoidable if you want to speak seriously on the subject.

    In short, I don’t have an answer for you, other than to point to the relevant discipline and encourage you to delve into it. As Harris has done none of that however, the answer certainly isn’t to be found in his hack work.

    Liked by 2 people

  20. Daniel Kaufman

    DM wrote:

    Rather than basing his morality on measuring the activation of the pleasure centres, he would base it on the neurological facts that correspond to well-being or eudaimonia.

    = = =

    You do realize that this is actually meaningless, right? For one thing, what eudaimonia is depends on which philosopher you read. For another, how could neurological facts “correspond to” something like eudaimonia, which literally means “human flourishing”? Aristotle, for example, believes that to flourish as a human being is to live a life of moral excellence or a life of intellectual excellence or a life involving excellences in various crafts, where each excellence itself involves any number of value judgments, often at the social level, and even after one has died. (He famously says that we cannot really determine whether someone’s life was one of flourishing until we see what his descendants have done with the projects he started.) What would it even mean to speak of “neurological correspondence” with something like this?

    This is not just bad ethics, it’s bad thinking, period. That a hack like Harris can be so successful is profoundly depressing. If we ever needed expert gatekeepers in the form of editors and genuine public intellectuals, it’s now, in this age of open, unfiltered information.

    Liked by 1 person

  21. Massimo Post author

    Coel,

    I’m going to write only one reply to you, because we’ve covered this many, many times, and we are quite obviously not going to convince each other. (Btw, I knew you would simply hate this chapter…)

    “This what critics of scientism say that such people do, but it isn’t true.”

    You keep telling us what scientistically-oriented people really believe, and yet Baggini directly interviewed Harris, and anyone can read what Harris, Shermer, Pinker, and all the others actually write about stuff. Oh, and there is Rosenberg, who in my book gets the prize for by far the most scientistic philosopher or scientist of the 21st century (not a compliment).

    “if you find it hard to believe that someone is actually serious, consider that you are likely misunderstanding them.”

    I wish you applied that maxim to yourself. Let’s see: on one side we have Baggini and myself (among many others), professional philosophers, and one even a professional scientist. On the other side (in the case of Harris) we have a popular writer with a degree in neuroscience. Who, I might add, explicitly dismisses all of the technical literature in the field he is criticizing as “increasing the boredom of the universe” (his words). And we are the ones who should be asking that question?

    I know you were referring specifically to Rosenberg there (who at least is a professional philosopher), but no, he wasn’t just addressing the moral realist, I don’t know where you get that. Rosenberg is a self-professed nihilist, not just a moral anti-realist, so Baggini’s disbelief is spot on. Alex (I’ve met him) simply doesn’t live according to his own precepts. He does think that certain things are morally reprehensible, but he has decided to “follow his reasoning” wherever it may lead, precisely the sort of thing Baggini warns against at the beginning of his book, a warning of which Hume would have wholeheartedly approved.

    “For all the bashing that Harris gets from philosophers, he is actually spot on there”

    No, and several people have explained this to you a number of times. I don’t know how else to get it across, so I’ll stop there.

    “Baggini is more confused than Harris here.”

    To paraphrase you, perhaps whenever you think that a professional philosopher who has given a lot of thought to X is hopelessly confused about X you should pause and ask yourself if you are not perhaps the one missing the point.

    DM,

    This will be my last response to you as well. I appreciate your contributions (and by all means, do continue to engage), but we have also covered much of this territory before, so you are by now perfectly familiar with my views.

    “I wouldn’t be so sure that the similarity of your views is much of a validation that you are interpreting him correctly and his followers are not.”

    Well, again, Baggini and I have read him carefully and critically, and have the necessary background to do so. Most of his followers have done neither, or lack the latter.

    “You’re comparing apples and oranges. Your landscape is that of the potential conceptual positions”

    I’m not comparing anything, I’m saying that Harris’ landscape does not exist, only the conceptual one does, as far as moral philosophy is concerned.

    “Consider 3 distinct questions:

    Whether science can determine our values
    Whether science can determine the peaks in the moral landscape (i.e. facts about how certain policies or choices will impact on well-being)
    Whether science can determine what we ought to do.”

    No to the first one. Concerning the second one, as I said, the landscape in question does not exist. In the third case, science can inform us as to what we ought to do, insofar as empirically relevant aspects or implications of our values come into play.

    “I actually thought he was pretty clear that he was not talking about naive pleasure but more about eudaimonia”

    But eudaimonia isn’t something that can be measured by a neuroscientist. No, he’s pretty clear in what he writes. The fact that it is incoherent with other parts of his book is part of my point, not a criticism of my criticism.

    “He would hope to find other neural correlates of eudaimonia to build a fuller picture”

    How would that help? What does that even mean? Once more: anything we say or do has “neural correlates,” because we need our brains to say or do anything. But just as finding the neural correlates of mathematical reasoning is irrelevant to solving mathematical problems, the same goes for ethical reasoning.

    “Accepting pain to oppose tyranny
    Accepting pain to increase one’s own fitness”

    Thanks for the clarification, though I still don’t see how that helps Harris.

    “omitting Mill clearly wouldn’t be acceptable in an academic paper. As a popular book for a scientistic new-atheist audience?”

    Unacceptable as well. Imagine if I had written my How to Be a Stoic without mentioning Epictetus or Marcus Aurelius, as if the whole idea was mine. Intellectual dishonesty would be the obvious judgment.

    Liked by 3 people

  22. saphsin

    I don’t think Sam Harris has any real commitment to utilitarianism other than some intuitive reaction to the concept: “oh, we should judge actions by their consequences, that’s just obvious.” Take, say, his commentaries on American foreign policy and Israel. He stresses the importance of the proclaimed intentions of elites and our Western values more than whatever consequences our actions have on other people in the Middle East, while applying different standards to Islamic terrorist groups.

    Liked by 1 person

  23. valariansteel

    Hi Socratic:
    “Surprisingly, businesses haven’t followed through with something like dosing company water coolers with low-grade antidepressants.”

    I don’t think it would work because of the pharmacokinetics of antidepressants. The patient must take the antidepressant daily (oral route of administration) for several weeks before the medication reaches a therapeutic level in the bloodstream. If workers were administered the antidepressant through a water cooler, the administration would be intermittent, unless each worker was required to drink the same amount daily (and continue drinking on weekends). Why not just give them a pill to take daily? Presto.

    It may be that newer treatment options work differently, such as ketamine infusion. Of course, IV route of administration gets the medication into the bloodstream quicker than the oral route. But even with ketamine, there is discussion about needing serial infusions.
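    The accumulation point above can be illustrated with a crude one-compartment sketch (the half-life, dose size, and time span are made-up assumptions for illustration, not clinical figures):

    ```python
    import math

    def trough_after(doses, half_life_days=1.5, days=28):
        """Crude one-compartment model: first-order elimination each day,
        plus any dose taken that day (absorption treated as instantaneous)."""
        k = math.log(2) / half_life_days  # elimination rate constant
        level = 0.0
        for day in range(days):
            level *= math.exp(-k)  # decay over one day
            level += doses(day)    # that day's dose, if any
        return level

    daily = trough_after(lambda d: 1.0)                                # a pill every day
    weekdays_only = trough_after(lambda d: 1.0 if d % 7 < 5 else 0.0)  # water cooler

    # Weekday-only intake plateaus at a lower level after the weekend gap,
    # so the steady daily pill wins.
    print(daily > weekdays_only)  # True
    ```

    Even this toy model shows why intermittent exposure undercuts the steady accumulation that daily dosing relies on.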

    Like

  24. brodix

    “Surprisingly, businesses haven’t followed through with something like dosing company water coolers with low-grade antidepressants.”

    Isn’t that what the coffee machine is for?

    Would it be safe to say that morality is emergent? Such that trying to reduce it to a set of rules is like trying to reduce language to a universal grammar? Platonism rears its head again.

    It serves a necessary function, but that doesn’t mean it can be perfectly ordered. Consequently every effort to do so ends up in some form of reductio ad absurdum. Sort of like where we are with many forms of religion.

    Like

  25. Mark Sloan

    Hi Massimo,

    Both Harris’s and Rosenberg’s views are easy targets, but I see them as outside the mainstream of the science of morality.

    Harris has his peculiar bindingness claims for increasing “well-being”. Rosenberg takes the odd view that the objective, empirical science of morality somehow debunks morality. (All this science debunks is the truth basis of the emotional feeling of cultural morality’s imperative bindingness, an adaptation that makes us better cooperators. Morality’s objective reality separate from human existence is a different issue which is independent of bindingness.)

    A much more difficult target for criticism is the growing consensus that the function (the primary reason it exists) of ‘morality’ is to increase the benefits of cooperation in groups. (Here ‘moral’ behaviors are those motivated by our moral sense and advocated by cultural moral norms – descriptively moral behaviors in moral philosophy.)

    Of course, philosophers are free to argue morality ‘ought’ to be something else than cooperation strategies. But until there is general agreement on that subject (which I doubt there ever will be) science still offers insights about what morality ‘is’ that appear useful for refining cultural moralities to better meet human needs and preferences.

    This science can readily answer practical questions such as:

    Why is the Golden Rule usually such an effective heuristic (a usually reliable, but fallible, rule of thumb) for moral behavior and when does it fail?

    Why have eating shrimp and homosexual sex sometimes been classified as moral abominations?

    Why are “Do not kill, steal, or lie”, respect for human rights, and justice as fairness all common moral norms?

    But the science of morality can do more than just usefully explain the why and how of these descriptively moral behaviors. This science is consistent with the objective reality of an ultimate source of morality, independent of human existence. That ultimate source is the cooperation/exploitation dilemma, how to sustainably obtain the benefits of cooperation without exploitation (a usually winning strategy) destroying those benefits.

    All intelligent species must solve this dilemma in order to form highly cooperative societies. That is, all intelligent species must encode solutions to this dilemma in their biology or culture (preferably in both), else they cannot form highly cooperative societies. People call these solutions moral behaviors. Other intelligent species may call them something else, but these solutions will have the same function as morality does for people.

    Perhaps surprisingly, this species independent dilemma also implies the existence of a universally moral subset of solutions to the cooperation/exploitation dilemma. This subset solves the cooperation/exploitation dilemma without exploiting anyone. Descriptively moral norms such as “slaves must obey their masters” and “homosexuality is evil” obviously exploit out-groups. But people have to sustainably cooperate in an in-group, exploiting no one, in order to cooperatively exploit out-groups – thus this subset is universally moral.

    So it appears to be objectively true that behaviors that solve the cooperation/exploitation dilemma without exploiting anyone are cross-species universally moral. Any claim that contradicts this is making a kind of category error about what morality ‘is’.

    Of course, science cannot tell us what our ultimate goals or obligations ought to be, or answer important questions such as “How ought I live?” or “What is good?”. And science is poorly equipped to develop coherent, well-functioning moral codes. But if a society is looking to define the moral code expected to best meet its shared needs and preferences, I know of no better candidate universal moral principle to base it on than the one innate to our physical reality: “behaviors are universally moral that solve the cooperation/exploitation dilemma and exploit no one”.
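    The cooperation/exploitation dilemma described above has essentially the structure of the prisoner’s dilemma from game theory. A minimal sketch with standard (purely illustrative) payoff numbers shows why exploitation is “a usually winning strategy” that nonetheless destroys the benefits of cooperation if everyone adopts it:

    ```python
    # Standard prisoner's-dilemma payoffs (illustrative numbers):
    # (my move, your move) -> my payoff.
    PAYOFF = {
        ("cooperate", "cooperate"): 3,  # both share the benefits of cooperation
        ("cooperate", "defect"):    0,  # I am exploited
        ("defect",    "cooperate"): 5,  # I exploit you: best single-encounter move
        ("defect",    "defect"):    1,  # universal exploitation destroys the benefits
    }

    exploit = PAYOFF[("defect", "cooperate")]
    mutual_cooperation = PAYOFF[("cooperate", "cooperate")]
    mutual_defection = PAYOFF[("defect", "defect")]

    # Exploiting beats cooperating in any one encounter, yet a society of
    # defectors does far worse than a society of cooperators.
    print(exploit > mutual_cooperation > mutual_defection)  # True
    ```

    Solutions “encoded in biology or culture” can then be read as mechanisms (reciprocity, reputation, punishment) that make sustained mutual cooperation stable despite the single-encounter temptation to defect.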

    Liked by 1 person

  26. Robin Herbert

    Hi DM

    A tricky case for Harris is where I make myself twice as happy with an act which hurts you to some lesser extent.

    That is a bit like me saying that I have a great method for solving the Einstein field equations, apart from the tricky cases that involve tensor analysis.

    The “tricky cases” are pretty much the subject matter of ethics and morality.

    No one lost much sleep wondering whether or not it was permissible to do something that made everybody, including themselves, happier.

    Suppose there is one society where the majority have a good deal of well-being, but it depends on the misery of the few.

    Suppose there is another society where everyone just about gets by, but what well-being they do have doesn’t depend on anyone’s misery.

    So is the first society a more moral society than the second? I imagine that the intuition of most would favour the second, although the first appears to have more well-being.

    Harris says we can ground our values on a continuum where the peaks and troughs represent greater and lesser well-being, but in order for there to be that continuum you need some sort of aggregation function that tells you one combination of states of well-being is better than another.

    The only way of doing this would already involve values.

    So Harris is asking us to ground our values on a continuum which is grounded on our values.
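    The aggregation-function point can be shown with a toy sketch (the well-being scores are hypothetical): which of the two societies sits on the higher “peak” flips depending on which aggregation rule we pick, and picking one is already a value judgment.

    ```python
    # Hypothetical well-being scores for the two societies described above.
    majority_on_misery = [9, 9, 9, 9, 1]  # high total, built on one person's misery
    everyone_gets_by = [5, 5, 5, 5, 5]    # modest, but no one suffers

    # Two candidate aggregation functions, each encoding a different value choice.
    total = sum          # classic utilitarian: maximize the sum
    worst_off = min      # Rawlsian-flavored maximin: raise the floor

    print(total(majority_on_misery) > total(everyone_gets_by))          # True: first society "wins"
    print(worst_off(majority_on_misery) > worst_off(everyone_gets_by))  # False: second society "wins"
    ```

    No measurement can settle which of these rules is the right one; that choice is exactly the value judgment the continuum was supposed to ground.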

    Liked by 1 person

Comments are closed.