Book Club: The Edge of Reason 5, the challenge of psychology

Let us continue our in-depth discussion of Julian Baggini’s The Edge of Reason, a book that aims, in a sense, at striking a balance between the Scylla of scientistic rationalism and the Charybdis of anti-rational relativism. Chapter 5 concerns what Julian calls “the challenge of psychology,” the idea that since much of our thinking is unconscious, we are not really rational beings so much as rationalizing ones.

The chapter begins with a short introduction to the famous trolley dilemma, introduced by philosopher Philippa Foot as a tool to bring out our moral intuitions. I will not summarize the thought experiment, since it is well known. Baggini says that it is obvious that when many people “go consequentialist” in one version of the dilemma, and “Kantian” in another, this is because different psychological intuitions, not any explicit moral reasoning, are at play. Which immediately brings him to Daniel Kahneman’s famous distinction between “System 1” and “System 2” reasoning: the version of the dilemma that involves a more personal interaction with others is likely to trigger our emotional responses (System 1), while the impersonal version activates our thinking in terms of large numbers and consequences (System 2).

The problem, of course, is that it may be difficult, philosophically speaking, to make sense of one’s diverging reactions to the different situations posed by the trolley dilemma: “if asked why we should not push the person, we don’t say, ‘I don’t know, it just feels wrong.’ Rather, we come up with various rational justifications, such as the idea that it is wrong to use a person as a means to an end — even when this is just what we were prepared to do in the lever case.”

Kahneman himself seems pretty pessimistic about the sort of inference about human reasoning that we should make from his research: “when asked if his 45 years of study had changed the way that he makes decisions, [Kahneman] had to reply, ‘They haven’t really, very little, because System 1, the intuitive system, the fast thinking, is really quite immune to change. Most of us just go to our graves with the same perceptual system we were born with.’”

Setting aside that even the interviewer had a hard time taking Kahneman’s words at face value, Baggini says “not so fast,” so to speak. He points out that System 1 is an “enemy of reason” only if we conceptualize reason as identical to formal logic, which he has been at pains to argue, in the previous chapters, is far too narrow a conception.

Julian maintains that the sort of “gut feelings” we sometimes have, especially, but not only, when it comes to moral situations, are in fact the result of quick heuristics embedded into System 1: “Heuristics are cognitive shortcuts, and the key is that they wouldn’t have evolved if they didn’t work more often than not. The problem is that they are so deep rooted that we often find ourselves using them even when we don’t need a quick, snappy solution but cool, calm reasoning.”

Julian seems to hint, in the passage above, that these System 1-based heuristics are the result of biologically rooted instincts, and surely in part that is the case. But I don’t see why they cannot also be the outcome of accumulated experiences, and more likely a deeply intertwined combination of both.

Baggini goes on to suggest that it isn’t at all obvious — as utilitarians, or Kantian deontologists, would argue — that moral questions ought to be analyzed solely on the basis of “cold” (i.e., impartial) reason. The most obvious case, he maintains, is that of parental love. As parents we are partial to our children, and given a choice between intervening on behalf of our child or on behalf of a stranger’s child, we do not hesitate and choose the former. And rightly so, says Julian, as the world wouldn’t likely be a better place if everyone treated their kids as random members of the population. That, of course, generates a tension between “local” ethics (i.e., our personal moral decisions) and “universal” ethics (what we should do when we think of humanity at large). Welcome to the human condition, where sound judgment (which, remember, for Baggini is what defines reason in the broadest terms) is a necessary component of our existence. And where Systems 1 and 2 constantly interplay.

Julian then moves to the perilous territory of “gendered” reason: what if it turns out that people of different genders think in significantly, if not radically, different ways, ways that are deeply rooted in their gender identity? Should we then not talk about reason(s), in the plural, instead of the singular term, and concept, we inherited from the Enlightenment?

He reports a strange conversation he had with the French philosopher Luce Irigaray, who has been influenced by the Lacanian school of psychotherapy, and who thinks of gender differences in a somewhat radical fashion: “When I interviewed her, I suggested that [her position] means that in a sense I was not meeting her at all, since we could not share the same understanding. She agreed. ‘In this moment we seem to be in the same place, inhabiting the same space, the same time, the same country, the same culture, the same language. In a way it is only an illusion.’”

Julian labels this an “extreme” position, “frankly not supported by the best evidence of psychology.” I’m slightly more blunt: it’s nonsense on stilts.

He elaborates along lines that seem eminently sound to me: “Feminist philosophy, for instance, is not separate from all other philosophy. A feminist critique of epistemology (theory of knowledge) has its force because it suggests there is something epistemology is missing because of distortions rooted in gender, distortions it seeks to remedy. Such a critique would lack any power if it amounted to the claim that there is male epistemology and female epistemology, and each of the two should mind their own business.” Exactly, though the latter is, indeed, the position of some radical feminists and gender studies scholars.

Baggini goes on to analyze the gender gap within the philosophical profession, ascribing it to the field’s intellectual culture: the assumption that discussions have to be value-neutral (while feminism, most obviously, isn’t), and especially the encouragement of a confrontational approach toward colleagues, which makes a number of women feel very uncomfortable.

All of this certainly does play a role (and indeed, I’ve seen it with my own eyes), but I would like to remind people that a comparable gender gap exists within plenty of other fields where there is no such (special) culture of confrontation, and where there are no approaches to technical matters that depart from value neutrality: mathematics, chemistry, physics and engineering come to mind. So I dispute the idea that the gender gap in philosophy is peculiar to the field, or that the profession itself should undergo some kind of radical change in order to resolve the problem. The problem is going to be resolved in the same way in which it is being addressed in other fields: by encouraging young girls to embrace areas that have been seen as traditionally “male,” on the simple ground that there is no reason at all why they shouldn’t succeed in them. And of course by an explicitly fair treatment of women undergraduate and graduate students, as well as faculty at different ranks. Something, incidentally, that philosophy as a profession is very aware of and has been implementing for years through the efforts of the American Philosophical Association.

So what does psychology tell us about human reason? Baggini suggests a revision of Plato’s famous analogy between the human mind and a chariot led by two horses: “we would do better not to think of the human soul as comprising two wildly different horses and a controlling charioteer, but as being one single equine which draws on all sorts of cognitive tools, from the conscious, systemic and deliberative to the automatic, unconscious and affective.” It’s more a mule than a thoroughbred, he says. The image may be less ennobling, but it is “better to be a many-skilled mule than one-trick pony.”

Categories: Book Club, Epistemology, Logic

160 replies

  1. And the fact that a position is taught in college does not imply that its claims are correct.

    On that basis we would have to believe that the competing claims of Utilitarianism and deontology are both correct.

  2. Massimo said:

    Setting aside that that sort of comment advances the discussion not a single iota, nobody said your position is not rational. We just said it’s wrong. There is a difference.

    Indeed. This is the basic lesson of the classical syllogism: the validity of the reasoning and the truth of the premises are two entirely different things.

    All humans are actually shape-shifting aliens controlled by Alex Jones.
    Socrates is human.
    Socrates is actually a shape-shifting alien controlled by Alex Jones.

    Perfectly rational. And, perfectly wrong.

  3. Robin,

    “And the fact that a position is taught in college does not imply that its claims are correct.

    On that basis we would have to believe that the competing claims of Utilitarianism and deontology are both correct.”

    Two good examples of bizarro reasoning. First, nobody has said that just because something is taught in college then it is true. We have simply argued that it is prima facie unlikely that a dilettante would get something fundamentally right that every professional has gotten fundamentally wrong.

    Second, the term “true” or “correct” simply does not apply to ethical frameworks, it’s a category mistake. They can be coherent, useful, etc., but not true. And both utilitarianism and deontology are coherent and useful.

  4. Hi Massimo

    “by definition; you get that one kind of rational ethics is one based on the idea that society is better off if we maximize happiness and reduce pain. Where on earth is the emotional part in this?”

    That just tells you what makes society better off. It still leaves out the crucial step as to why the utilitarian decides to make society better off, rather than, say, pursue his own self-interest.

    Until that question is answered the only plausible motive for this is still that they feel that it would be good for society to be better off.

  5. Robin,

    At this point I have to conclude that you have a poor conception of human reason. There are all sorts of things I do not because I feel like doing them, but because I think they are right, or good, or useful.

    In a sense, you are making one of the two mistakes Baggini’s book is about: hyper-rationalism on one side, denial of a fundamental role to reason on the other side.

  6. Robin and Coel:

    What exactly is the point of this conversation? You clearly do not respect the fact that Massimo and I know our own subject enough to get the most basic facts about one of the most basic ethical theories right. So, why continue to talk to us?

    Instead, I suggest that you jointly write letters to the Presidents of the world’s colleges and universities and explain to them that they should no longer offer courses in Ethics taught by philosophers, because they don’t know their own subject. After all, every philosopher credentialed to teach philosophy is going to draw the same distinction between Emotivism and Consequentialism that Massimo and I have.

    You should also contact all the nursing and other programs that require bioethics/medical ethics for professional certification and tell them the same thing. Those philosopher guys/gals don’t know their subject! Hire someone else to teach ethics! Or stop teaching ethics altogether.

    Let us know how it goes. But until then, why continue tormenting us, the silent readers of this blog, and yourselves? No one is learning anything from this conversation. Quite the opposite, in fact.

  7. Massimo,

    “At this point I have to conclude that you have a poor conception of human reason. There are all sorts of things I do not because I feel like doing them, but because I think they are right, or good, or useful.”

    Which of us can claim to have a good conception of human reason? Human reason has this in common with quantum physics: the people who claim to understand it probably don’t.

    But you are still leaving out the crucial step. If I do the right, good or useful thing and ask myself why I did it rather than, say, pursue my own self-interest, I cannot find any objective reason why I should. If I cannot find any objective reason why I did these things, then it was either arbitrary or it was because I felt that I should do the right, good or useful thing.

    Can you give an objective reason why you should do what you thought was the right, good or useful thing? If there was an objective reason then you would know it. If you don’t know it then it was either arbitrary or based on emotion. If it is based on emotion then it is better to put those emotional commitments up front.

    “In a sense, you are making one of the two mistakes Baggini’s book is about: hyper-rationalism on one side, denial of a fundamental role to reason on the other side.”

    Help me out here. Which thing that I said implies I am making those mistakes?

    I think the mistake you are making is the assumption that if I don’t agree with Baggini it can only be because I have not understood what he says.

  8. Even if there were some procedure that gives us the objectively right answer to moral questions, one thing I can be sure of is that I don’t know that procedure.

    So it follows that I can’t be using that procedure and so my moral decisions must either be arbitrary or based on emotions. Decisions based on emotions are at least better than arbitrary decisions. I can’t not make moral decisions so until I find that procedure the only kind of moral reasoning I can pursue is one that is based on emotions.

    If anyone else was using this procedure then they would know what this procedure is and would be able to lay it out so that we could all know it.

    But no one seems to be able to do this and so it seems that no one else has a procedure that gives objectively right answers to moral questions.

    So we are left with the inescapable conclusion that even if there is a procedure for getting the objectively right answer to moral decisions, everyone’s moral decisions are also either arbitrary or based on emotions.

    In this case the only non-rational approach is to be in denial about the role emotion plays in our decisions.

    The claim that moral reasoning about decisions based on emotions is not rational doesn’t hold water.

    The first point is that there are objectively true facts about what I want and don’t want and so we can reason with those objectively true facts.

    I can adopt one axiom: “If x is the thing I most want to do, or the thing among unavoidable options that is the least abhorrent to me, and I cannot discover any objective reason why I should not do x, then I will do x.”

    That seems perfectly reasonable to me, but if it does not seem reasonable to anyone else then it does not mean that it cannot be used as part of a rational process.

    So if I have a valid argument using this axiom and objectively true premises then that is a rational argument, pretty much by definition.

    This does not preclude that it is often the case that the thing I most want to do is to benefit society, but it does not impose the unrealistic condition that I will always want to do it.

    If you are going to claim that this valid argument based on an axiom and objectively true premises is not a rational process then you cannot claim that the utilitarian who has a valid argument based on an axiom and objectively true premises is using a rational process.

    All these elements are the same in both cases and so if one is rational, then so is the other. If one is not rational, then neither is the other. You can’t have it both ways.

    And, as I have been pointing out, my approach does not have the problems that Baggini describes and does not require us to modify our conception of rationality.

  9. If a moral philosophy makes sense to someone but they can’t prove every last aspect of it, isn’t this comparable to believing in the existence of the external world, or the existence of other minds? You can’t prove everything, but I don’t see why anyone would say I “feel” the external world is real rather than “think” it.

    Good and bad are not based on emotion. If I die, that is truly bad for me, or at least for those counting on me. It is relative to me. To someone who doesn’t know me, it would be irrelevant.

    Emotion is our selves sorting through all the competing elements attracting and repelling us.

    It is bottom up. Good and bad are not a cosmic duel between righteousness and evil, but the basic binary of attraction to the beneficial and repulsion of the detrimental. Given we consist of enormous amounts of cells, organs, connections, relationships, internally and externally, all those competing impulses are shades of that essential binary of attraction and repulsion.

    Something isn’t good because we decide to like it. We like it because we are attracted to it, as opposed to repelled, or neutral to it.

    I realize I’ve said all this before, but it doesn’t get much reply. Though it might be useful in unlocking the conflict between utilitarianism and emotivism.

    Bunsen,

    Yes, there is an essential dichotomy between eastern and western thought. In a nutshell, eastern thought tends to be context oriented and western thought object oriented. Some years ago, for example, I read of a basic matching test that compared objects and contexts: there would be birds, cows, grasses and trees, and people raised in the different cultures were asked to match them up. The westerners tended to match the birds with the cows and the grasses with the trees, because those pairs are fauna and flora respectively. The easterners tended to match the birds with the trees and the cows with the grass, because that was environmentally contextual.
    Eastern religions tend to be more about the journey than the destination: more about The Way than all meeting in heaven. The idea of monotheism is distinctly western, as an ideal of (human) perfection. The yin and yang are not exactly monolithic.
    The basic concept of time is reversed. Westerners tend to see the future as in front of the observer and the past behind, because we see ourselves as distinct individuals moving through our context. The eastern view is that the past is in front and the future behind, because we know the past and see what is in front, while the future and what is behind are unknown. This is more contextual, in that we see events after they occur and this information flows by us.
    Safe to say the western view is more politically effective in the short term. Though the flip side will come round eventually.
