Progress in Philosophy — IV

[For a brief explanation of this ongoing series, as well as a full table of contents, go here.]

Ethics: the utilitarian-consequentialist landscape

It should be clear at this point that we could multiply the examples in this chapter by orders of magnitude, and cover — I suspect — most areas of philosophical scholarship. Instead, let me simply add one more class of examples, from ethics, focusing in particular on utilitarianism and the broader class of ethical theories to which it belongs, consequentialism. [7] The history of utilitarianism is yet another good example of progress in philosophy, with specific regard to the subfield of moral philosophy — and I say this as someone who is not particularly sympathetic to utilitarianism. The approach is characterized by the idea that what matters in ethics are the consequences of actions (hence its tight connection with the broader framework of consequentialism). The idea can be found in embryonic forms even earlier than the classic contributions by Jeremy Bentham and John Stuart Mill. For instance, Driver (2009) begins her survey with the theologically inclined 18th century British moralists, such as Richard Cumberland and John Gay, who linked the idea that promoting human happiness is the goal of ethics to the notion that God wants humans to be happy. This coincidence of our own desire for happiness and God’s plan for our happiness, however, provides a picture of utilitarianism that is too obviously and uncomfortably rooted in theology, one in which, moreover, it is not at all clear what (philosophical) work God’s will actually does for the utilitarian.

The decisive move away from theological groundings and into natural philosophy was the result of the writings of people like Shaftesbury, Francis Hutcheson and David Hume. Shaftesbury proposed that we have an innate sense of moral judgment, an idea Hume also embraced, though not in a realist sense (i.e., Hume did not think that moral right and wrong are objective features of the world, independent of human judgment). One can think of Shaftesbury as a “proto” utilitarian, since sometimes it is difficult to distinguish utilitarian from egoistic arguments in his writings, as argued by Driver (2009). The move to a more clearly utilitarian position is already found in Hutcheson’s An Inquiry Concerning Moral Good and Evil (1738), where he wrote: “so that that action is best, which procures the greatest happiness for the greatest numbers; and that worst, which, in like manner, occasions misery” (R, 283-4). Even in Hutcheson, though, we still don’t see a completely formed utilitarian approach to ethics, as he mixes in foreign elements, for instance when he argues that the dignity or “moral importance” of certain individuals may outweigh simple numbers of people affected by a given moral judgment.

Following Driver’s capsule history, proto-utilitarians were succeeded by the major modern founders of this way of looking at ethics: Jeremy Bentham and John Stuart Mill. Interestingly — and although much discussion and progress on utilitarianism has focused on Mill — the contrast between him and Bentham also highlights the difference between two branches in conceptual space: egoistic utilitarianism (Bentham) and so-called altruistic utilitarianism (Mill).

Bentham was influenced by both Thomas Hobbes and David Hume. He got his theory of psychological egoism from the former and the idea of social utility from the latter, but the two were otherwise incompatible: it is hard to imagine an egoist who agrees to the notion of social utility above and beyond what is useful for himself. Bentham was aware of this problem in his approach, though his attempts to deal with it were less than satisfactory. For instance, he thought that a reconciliation between the two perspectives could be arrived at by way of empirical investigation, if the latter showed a congruence between personal and social welfare. But that is no principled way to resolve the issue, as one still has to decide which branch of the fork to take in case the empirical evidence is not congruent. Probably as a result of this sort of difficulty, Bentham simply decided to abandon his commitment to psychological egoism and a fully Hobbesian view of human nature in favor of a more moderate, more Humean, take. Hume, in turn, was no utilitarian, as he thought that character was the salient focus when it comes to ethical judgment. But Hume also wrote about utility as the measure of virtue, and that is what Bentham adopted from him, particularly because Bentham was interested in distinguishing between good and bad legislation (respectively characterized by positive and negative consequences in terms of social utility).

Driver (2009) highlights Bentham’s discussion of homosexuality, in which he explains that having an antipathy for an act is simply not sufficient to justify legislation against it. The quote is remarkably modern, reminding me of recent social psychology results like those discussed by Jonathan Haidt (2012) to the effect that people have a tendency to confuse a sense of disgust for a well-founded moral judgment: “The circumstances from which this antipathy may have taken its rise may be worth enquiring to … One is the physical antipathy to the offense … The act is to the highest degree odious and disgusting, that is, not to the man who does it, for he does it only because it gives him pleasure, but to one who thinks of it. Be it so, but what is that to him?” (Bentham 1978, 4, 94). The bottom line, for Bentham, is that actions are not intrinsically good or bad, but only good or bad in proportion to their consequences in terms of social utility. Not only does this disqualify obviously suspect candidates as sources of moral evaluation, like whether an act is natural or not; it also means that values such as personal autonomy and liberty are only instrumentally, not fundamentally, good (i.e., they can be overridden, if need be).

The first major move away from Bentham’s starting point in exploring the utilitarian landscape was Mill’s rejection of the idea that differences between pleasures are only quantitative, not qualitative. That position had opened Bentham to a number of objections, including that sentient animals would therefore acquire the same moral status as humans, and the observation that Bentham had no way to discriminate between what Mill eventually referred to as “lower” and “higher” pleasures: drinking a beer while watching the World Cup should, for Bentham, be the same as listening to Beethoven. [8] Mill’s famous defense of the distinction between higher and lower pleasures is itself open to criticism, hinging as it does on the problematic idea that people who are capable of enjoying both types of pleasure are best suited to make judgments about it. As he famously put it: “It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied. And if the fool, or the pig, are of a different opinion, it is because they only know their own side of the question” (1861, ch. 2).

As noted above, though, the major difference between Mill and Bentham lies in their respective views of human nature, where Mill’s is more positive, including the idea that a sense of justice, for instance, is a natural human impulse, which we can then refine and expand by way of reason — in this Mill was very much aligned with Hume (Gill 2000), and arguably influenced even by the ancient Stoics (Inwood 2003). However, Bentham’s and Mill’s ways of looking at utilitarianism also had quite a bit in common. For instance neither of them was intrinsically opposed to the idea of rights, although for a utilitarian any talk of rights has to be couched in terms of utility, as there is no such thing as a natural right (a concept that Bentham famously dismissed as “nonsense on stilts”).

The next major leap in the utilitarian conceptual landscape was made by Henry Sidgwick, with his The Methods of Ethics (1874). Sidgwick noticed a major ambiguity at the heart of early utilitarian philosophy: “if we foresee as possible that an increase in numbers will be accompanied by a decrease in average happiness or vice versa, a point arises which has not only never been formally noticed, but which seems to have been substantially overlooked by many Utilitarians. For if we take Utilitarianism to prescribe, as the ultimate end of action, happiness on the whole, and not any individual’s happiness, unless considered as an element of the whole, it would follow that, if the additional population enjoy on the whole positive happiness, we ought to weigh the amount of happiness gained by the extra number against the amount lost by the remainder” (1874, 415). In other words, utilitarians need to distinguish between the average degree of happiness in the population and the sheer number of individuals enjoying that degree of happiness. If the goal is to increase happiness tout court, then this can be accomplished either by increasing population size while keeping average happiness constant (logistical issues aside, of course), or by keeping the population constant and increasing the average happiness of individuals. So the quantity that utilitarians really need to focus on is the product of population size and average happiness.
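Sidgwick’s point is, at bottom, arithmetical. Here is a toy calculation (my own illustration, not Sidgwick’s, and the numbers are arbitrary) showing how the total measure — population size times average happiness — can rate two very different worlds as exactly equivalent:

```python
# Toy illustration of Sidgwick's ambiguity: total happiness is the product
# of population size and average happiness, so the same total can be reached
# by very different combinations of the two factors.

def total_happiness(population, average):
    """Total utility as the product Sidgwick says utilitarians must track."""
    return population * average

# World A: small population, high average happiness.
a = total_happiness(population=1_000, average=80.0)

# World B: ten times the people, one tenth the average happiness.
b = total_happiness(population=10_000, average=8.0)

assert a == b == 80_000.0  # identical totals, very different worlds
```

A pure total view is indifferent between the two worlds; a pure average view strongly prefers World A — which is precisely the fork Sidgwick says utilitarians had overlooked.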

By the turn of the 20th century yet another refinement in conceptual space was made to basic utilitarian doctrines, chiefly by G.E. Moore. Like Mill before him, he realized that Bentham’s original views did not discriminate between types of pleasures, some of which ought to be regarded as positively unethical. Bentham had no principled way of discounting the pleasure felt by sadists, for instance, as long as it somehow outweighs the pain they inflict on their victims. Moore then developed a pluralist (as opposed to a monist) doctrine of intrinsic value. The good cannot be reduced simply to pleasure, as it comes in a variety of forms. For Moore, beauty is also an intrinsic good, a position that led him to excuse on ethical grounds cases in which artists pursue their muse while at the same time abandoning their duties to their family (e.g., Gauguin), as long as the result of such a tradeoff is more beauty in the world.

Moore’s (admittedly a bit vague) idea of the “organic unity” of value also helped utilitarians improve their framework by pre-empting a number of objections that had been raised in the meantime. The concept of organic unity is drawn from an analogy with the human body, where it makes little sense to talk about the value of individual organs, adding them up to get the total value of the whole body. Rather, one needs to take into account how the parts fit into the whole. Similarly, according to Moore, experiencing beauty has value in itself, and that value is augmented if the beautiful object actually exists. But the combination of these two elements is much more than the simple addition of the two parts, a position that allows Moore and fellow utilitarians to conclude that happiness based on knowledge is much better than happiness based on delusions. Again, notice the struggle to recover Mill’s intuition that not all pleasures are created equal, and to arrive at a rationally defensible view of why, exactly, that is the case.

Once we get firmly into the 20th century the history (and progress in conceptual space) of utilitarianism coincides with the broader history of consequentialism (Sinnott-Armstrong 2006). From this perspective, classic utilitarianism can be seen as a type of act consequentialism, where the focus is on the rightness of acts in terms of increasing the good (say, happiness), as their consequence. Modern consequentialism is also an improvement on classic utilitarianism because it parses out a number of positions more or less implicitly accepted by early utilitarians, positions that carry distinct implications for one’s general ethical framework. For example, there is a difference between actual and direct consequentialism — in the first case what matters are the actual consequences of a given action (not the foreseeable ones), while in the second case what counts are the consequences of the focal act itself (not one’s motives for carrying out the action). Or take the distinctions among maximizing, total and universal consequentialism, where the moral good of an action depends respectively on the best consequences, on their total net effect, or on their effect on all sentient beings. The issue is not that these (and other) utilitarian positions are necessarily contradictory, but that each needed to be unpacked and explored independently of the others, to arrive at a more fine grained picture of the consequentialist landscape as a whole.

One specific example of improvement on a thorny issue for early utilitarians is the problem posed by hedonism. [9] I have mentioned that Bentham could not discriminate between what most people would recognize as morally good pleasures and those of a sadist, and both Mill’s and Moore’s attempts to improve on the problem only went so far. Nozick (1974) took a further step forward with his experience machine thought experiment (famously re-imagined in the movie The Matrix). The idea is to consider a hypothetical machine that is capable of mimicking the feeling of real experiences in all respects, so that one could live as “happy” and “successful” a life as conceivable. Yet one would not be living a real life as commonly understood. Nozick’s contention was that it does not seem at all irrational for someone to refuse to be hooked to the experience machine, thus creating a significant problem for a purely hedonistic view of utilitarianism, necessitating its abandonment or radical rethinking. One way modern consequentialists (e.g., Chang 1997; Railton 2003) have attempted to tackle this issue is through the recognition of the incommensurability of certain moral values, and hence the in-principle impossibility of resolving certain ethical dilemmas, which in turn leads to a (non-arbitrary) type of utilitarian pluralism.

Another standard problem for utilitarianism, suitable to illustrate how philosophers recognize and tackle issues, is of an epistemic nature: when we focus on the consequences of morally salient actions, are we considering actual or expected consequences? The obvious limitation plaguing classic utilitarianism — as was noted by Mill himself — is that it seems to require an epistemically prohibitive task, that of calculating all possible ramifications of a given action before arriving at a consequentialist judgment. One option here is to say that the utility principle is a criterion to decide what is right, but that it does not amount to a decision making algorithm. While this may make it sound like utilitarians cornered themselves into a self-refuting, or at the least, morally skeptical position, this is not necessarily the case. Consider an analogy with an engineer deploying the laws of physics to design and then build a bridge. In order to accomplish the task, the engineer needs to know enough about the laws of physics and the properties of the materials she is about to use, but it would be unreasonable to expect omniscience about all outcomes of all potential physical interactions, however rare, between the bridge and its surrounding environment — some of which may actually cause the obviously unintended consequence of the collapse of the bridge. Similarly, the utilitarian can say that under most circumstances we have sufficient knowledge of human affairs to be able to predict the likely consequences of certain courses of action, and therefore to engage in the same sort of approximate, and certainly not perfect, calculation that the engineer engages in. We have good guiding principles (the laws of physics for the engineer, the utility principle for the moral person), but we face a given degree of uncertainty concerning the actual outcome.
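The engineer’s style of approximate calculation can be made concrete. In the sketch below (my own illustration, with invented actions and made-up numbers), an action is evaluated by its expected — that is, probability-weighted — utility, which is all an epistemically limited agent can be asked to compute:

```python
# Sketch: judging actions by expected rather than actual consequences.
# Each possible outcome of an action is a (probability, utility) pair;
# the actions and figures below are purely illustrative.

def expected_utility(outcomes):
    """Probability-weighted sum of utilities for one course of action."""
    total_p = sum(p for p, _ in outcomes)
    assert abs(total_p - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * u for p, u in outcomes)

# A hypothetical choice faced by a moral agent:
actions = {
    "donate to a vetted charity": [(0.9, 10.0), (0.1, -1.0)],  # likely helps, small risk of waste
    "keep the money": [(1.0, 2.0)],                            # certain but modest benefit
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
# expected-consequence reasoning favors donating (8.9 vs. 2.0),
# without pretending to foresee which outcome will actually occur
```

The point of the analogy is that uncertainty about actual outcomes no more refutes the utility principle than the occasional bridge collapse refutes the laws of physics.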

Even so, the consequentialist is not home free yet, since a critic may make another move that manages to raise additional issues. If we shift our focus to the sort of consequences that can reasonably be contemplated by the limited epistemic means available to human beings, are we then talking about foreseen or foreseeable consequences? There is a distinction there, but it is not clear whether it is sharp enough to be morally salient. Generally speaking the range of foreseeable consequences of a given action is broader — sometimes much broader — than the range of consequences actually foreseen by any given agent. Consider an analogy with chess playing: the gap between foreseen and foreseeable may be narrow in the case of a Grand Master, but huge in the case of a novice. The analogy, however, points toward the fact that — rather than being a problem for consequentialism per se — the distinction between foreseen and foreseeable consequences suggests that we as a society should engage in better ethical training of our citizens, just like better training is at least part of what makes the difference between a Grand Master (or even a half decent player) and a novice.

As I mentioned before, although Mill talked about rights, the concept poses significant and well known issues for consequentialists, as illustrated by the famous problem of the emergency room. The human body has five vital organs (brain, heart, kidneys, liver, and lungs). Imagine you are a doctor in an emergency room and you are brought four patients, whose heart, kidneys, liver, and lungs, respectively, are failing (there would be nothing you could do about a patient with a failing brain, so we won’t consider that situation). On utilitarian grounds, why would it not be acceptable for you to go outside, pluck a healthy person at random from the sidewalk, and extract four of his vital organs to distribute among your patients? Prima facie you are saving four lives and losing one, so the utility calculus is on your side. [10]

Utilitarians here have a number of options available as countermoves in logical space. The simplest one is to bite the bullet and acknowledge that it would, in fact, be right to cut up the innocent bystander in order to gain access to his vital organs. A rational defense of this position, while at the same time acknowledging that most people would recoil in horror from considering that course of action, is that the concocted example is extreme, and that our moral intuitions have evolved (biologically or culturally) to deal with common occurrences, not with diabolical thought experiments. Few utilitarians, however, have the stomach to go that route, thankfully. An alternative move is to agree that the doctor ought not to go around hunting for vital organs by drawing a distinction between killing and dying, where the first is morally worse than the second. The doctor would be killing an innocent person by engaging in his quest, while the four (also innocent) patients would die — but not be killed — by his inaction, forced upon him by the lack of acceptable alternatives. A third available move is to introduce the concept of the agent-relativity of moral judgment. The idea is that we can see things either from the point of view of a dispassionate observer or from that of the moral agent (here, the doctor), and the two don’t need to agree. In the specific case, the observer may determine that a world in which the doctor cuts up an innocent to extract his vital organs is better — utility-wise — than a world in which the doctor does not act and lets his patients die. But the doctor may justifiably say that he also has to take into account the consequences for him of whatever course of action, for instance the fact that he will have to live with the guilt of having killed a bystander if he goes through with the nasty business. 
The world would therefore be better or worse depending on which perspective, the observer’s or the agent’s, one is considering, without this implying a death blow — so to speak — to the whole idea of consequentialism.

One more significant branching in conceptual space for consequentialism is represented by the distinction between direct and indirect varieties of it, where a direct consequentialist thinks that the morality of X depends on the consequences of X, while an indirect consequentialist thinks that it depends on consequences of something removed from X. There are several sub-types of both positions. Considering indirect consequentialism, for instance, this can be about motives, rules, or virtues. Indirect rule consequentialism is probably one of the most common stances, holding that the moral salience of an act depends on the consequences of the rule whose implementation gave rise to the act. At this point, though, if you suspect that at least some types of indirect consequentialism begin to look less like consequentialism and more like one of its major opponents in the arena of ethical frameworks (i.e., rule consequentialism approaches deontology, while virtue consequentialism approximates virtue ethics), you might be onto something.

Yet another popular criticism of generalized utilitarianism is that it seems to be excessively ethically demanding of moral agents. Peter Singer’s (1997) famous drowning child thought experiment (as you might have noticed by now, many thought experiments concerned with utilitarianism tend toward the gruesome) makes the situation very clear. Singer invites us to consider seeing a child who is about to drown when we have the ability to save him. To do so, however, we would have to get into the water without delay, thus ruining our brand new Italian leather shoes. Clearly, I would hope, most people would say damn the shoes and save the child. But if so, points out Singer, why don’t we do the analogous thing all the time? We could easily forgo our next pair of shoes (or movie tickets, or dinner out, or whatever) and instead donate money that would save a child’s life on the other side of the planet. Indeed, Singer himself is famous for putting his money where his mouth is, so to speak, and donating a substantial portion of his income to charitable causes (Singer 2013). The problem is that, at least for most of us, this utilitarian demand seems excessive, confusing what is morally required with what may be morally desirable but optional. Can utilitarians somehow avoid the seemingly unavoidable requirement of their doctrine to expand far beyond what seems like a reasonable call of duty for the typical moral agent? If not, they would essentially be saying that most of what we do every day is, in terms of utility, downright morally wrong — not exactly the best way to win sympathizers for your ethical framework.

Once again, several alternatives are available in conceptual space, and we have already encountered a number of the necessary tools to pursue them. One goes back to Mill himself, who argued that it may be too costly to punish people who do not comply with Singer-style demands placed upon them, in which case utility would be maximized by not imposing that kind of burden on moral agents. Or one may invoke agent-relative consequentialism, granting that the agent’s and a neutral observer’s perspective are sufficiently different to allow the agent a way out of the most stringent constraints. My favorite among the available offerings is something called satisficing consequentialism, which maintains that utility cannot always be maximized, so that it is morally sufficient to generate “enough” utility. This may sound like an easy way out for the consequentialist, but it actually has a parallel with the process of natural selection in evolutionary biology. A common misconception of natural selection is that it is an optimizing process, i.e., that it always increases the fitness of the organism to its maximum possible value. But both empirical research and theoretical modeling (e.g., Ward 1992) actually show that natural selection is rather a satisficing mechanism: it produces organisms whose degree of adaptation to their environment is “good enough” for their survival and reproduction. The reason for this is analogous to the one that motivates satisficing consequentialism: to go beyond good enough would be too costly, and in fact would end up not maximizing fitness after all.
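The contrast between maximizing and satisficing can be sketched as a pair of decision rules (again my own illustration; the options and utility figures are invented):

```python
# Sketch: maximizing vs. satisficing decision rules (invented options and numbers).
# A maximizer always picks the highest-utility act; a satisficer accepts any act
# whose utility clears a "good enough" threshold, on the grounds that squeezing
# out the last unit of utility is itself too costly for the agent.

def maximize(options):
    """Return the act with the highest utility."""
    return max(options, key=options.get)

def satisfice(options, threshold):
    """Return the first act whose utility meets the threshold, or None."""
    for act, utility in options.items():
        if utility >= threshold:
            return act  # good enough; stop deliberating
    return None

acts = {
    "volunteer locally": 6.0,
    "donate 5% of income": 7.5,
    "donate 50% of income": 9.0,
}

assert maximize(acts) == "donate 50% of income"        # the Singer-style demand
assert satisfice(acts, threshold=6.0) == "volunteer locally"  # morally sufficient
```

The satisficer’s threshold plays the same role as “good enough” adaptation in natural selection: it caps the cost of deliberation and demand, rather than insisting on the global optimum.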

The sort of examples we have briefly examined in this section could easily be multiplied endlessly, even branching into other ethical frameworks (e.g., evolution of and progress in virtue ethics, or deontology), as well as to entirely different areas of philosophical inquiry (metaphysics, aesthetics, philosophy of mind, and so forth). But I hope that the general point has been made sufficiently clearly. Even so, the reader may also suspect that some of this back-and-forth in conceptual space may, in the end, be rather pointless (I discussed this briefly in the Introduction, in the specific case of “Gettierology”). And some (maybe even a good amount) of it probably is. But let me explain and expand on this a bit, by way of concluding this chapter with a commentary on Dan Dennett’s (2014) distinction between chess and chmess, and why it pertains to the subject matter of this entire book.

Notes

[7] For a general framework comparing the major ethical theories, thus better situating utilitarianism, see here (accessed on 19 November 2015).

[8] Of course, one could simply bite the bullet on this one. But I’m more sympathetic to Mill’s attempt, if not necessarily to the specific way he went about it.

[9] Note that I am not actually attempting to adjudicate the soundness of any of the above moves over any of their rivals. As I said, I do not actually buy into a consequentialist ethical framework (my preference goes to virtue ethics). The point is simply that modern utilitarianism is better (i.e., it has made progress) because of this ongoing back and forth with its critics, which has led utilitarians to constantly refine their positions, and in some cases to abandon some aspects of their doctrine.

References

Bartels, D.M. and Pizarro, D.A. (2011) The mismeasure of morals: antisocial personality traits predict utilitarian responses to moral dilemmas. Cognition 121:154-161.

Bentham, J. (1978). Offences Against Oneself. L. Compton (ed.), The Journal of Homosexuality 3:389-406 and 4:91-107.

Chang, R. (1997) Incommensurability, Incomparability, and Practical Reason. Harvard University Press.

Dennett, D. (2014) Intuition Pumps And Other Tools for Thinking. W. W. Norton.

Driver, J. (2009) The history of utilitarianism. Stanford Encyclopedia of Philosophy (accessed on 26 June 2012).

Gill, M. (2000) Hume’s Progressive View of Human Nature. Hume Studies 26:87-108.

Haidt, J. (2012) The Righteous Mind: Why Good People Are Divided by Politics and Religion. Pantheon.

Hutcheson, F. (1738) An Inquiry Concerning Moral Good and Evil (accessed on 7 May 2014).

Kahane, G., Everett, J.A.C., Earp, B.D., Farias, M. and Savulescu, J. (2015) ‘Utilitarian’ judgments in sacrificial moral dilemmas do not reflect impartial concern for the greater good. Cognition 134:193–209.

Inwood, B. (editor) (2003) The Cambridge Companion to the Stoics. Cambridge University Press.

Mill, J.S. (1861) Utilitarianism. R. Crisp (ed.), Oxford University Press, 1998.

Nozick, R. (1974) Anarchy, State, and Utopia. Basic Books.

Railton, P. (2003) Facts, Values, and Norms: Essays toward a Morality of Consequence. Cambridge University Press.

Sidgwick, H. (1874) The Methods of Ethics (accessed on 9 May 2014).

Singer, P. (1997) The Drowning Child and the Expanding Circle. New Internationalist (accessed on 9 May 2014).

Singer, P. (2013) The why and how of effective altruism. TED Talk (accessed on 9 May 2014).

Sinnott-Armstrong, W. (2006) Consequentialism. Stanford Encyclopedia of Philosophy (accessed on 13 January 2010).

Ward, D. (1992) The Role of Satisficing in Foraging Theory. Oikos 63:312-317.

104 thoughts on “Progress in Philosophy — IV

  1. garthdaisy

    It’s time philosophers stopped “it’s not science” as a pejorative, because philosophy is also not science. I’ll say it again, any philosophical theory ever should consider it an upgrade if it were to be granted the status of “just so story.”

    Like

  2. garthdaisy

    Massimo,

    “As for what it can tell us about normative ethics: not much. If anything,”

    Agreed. Except that it demonstrates where morality actually comes from in such a way as to make the pursuit of normative ethical theories a fools errand.

    “it is the study of psychology that is informative because it can tells us what people value and how they change behavior, when they do.”

    Phycology and evo-phych are the same thing now. Evo-psych is an addition to psychology not a replacement for it. It’s a very generalized term that simply refers to applying the evolutionary lens to psychological practice and study.

    “Singer’s book on animal liberation, for one, has been hugely influential in practice.”

    That’s Singer, not the field of philosophy. Most philosophers disagree with Singer.

    “The founding fathers of the USA wrote the Constitution on the basis of their understanding of the philosophy (and ethics and social politics) of the Enlightenment. And so on and so forth.”

    And now we have USA and Donald Trump. Thanks for the contribution to morality Locke, Hobbes, Smith.

    “But inform doesn’t mean determine”

    Agreed.

    Like

  3. Coel

    Hi garthdaisy,

    Except that it demonstrates where morality actually comes from in such a way as to make the pursuit of normative ethical theories a fools errand.

    Agreed. From a scientific and evolutionary point of view, moral realism and the quest for a normative ethical theory that can tell us what we “should do” are clearly erroneous. That is an important contribution from science to moral philosophy (it’s just a pity that too few have yet realised that).

    Liked by 3 people

  4. garthdaisy

    What Coel said. Thank you, Coel. That is the main point.

    Synred,

    “So I don’t buy this ‘just so’ story, but if it was true it would seem relevant.”

    Thank you for this. This is a good example to point out the mistake those opposed to ev-psych are generally making. As Synred said, if it were true that rape was adaptive, it would be relevant, but not in the way that most detractors of ev-psych think it would be relevant. It would not make rape morally okay or excusable. It would make it no less the heinous crime it is in today’s world. Just like the knowledge that xenophobia was adaptive does not make xenophobia morally okay, or that the knowledge that sex evolved for baby making means we should only have sex to make babies.

    It’s about recognizing the origin and anachronistic nature of our feelings and drives, so that we have a better chance of steering them in todays world. We decide what to do with these feelings in this environment that were adaptive to another environment, but you can not address racism properly unless you are aware of the fact that xenophobia is a powerful, adaptive, and anachronistic instinct.

    Correct and ample facts lead to good oughts.
    Incorrect and insufficient facts lead to bad oughts. Usually. Broken clocks are right twice a day.

    Liked by 3 people

  5. synred

    I’m not a philosopher and I say it’s not science — yet and may never be.

    Even as ‘stories’ it may not be useless. Comparative study of us with other animals might be more useful [a].

    William Proxmire once gave his Golden Fleece award to some psychologist based on a paper called something like ‘Love among the Chimpanzees’. Don’t give your papers cute names!

    http://scienceblogs.com/insolence/

    Like

  6. garthdaisy

    Coel,

    I’m still toying with the idea, thought, that the complete discrediting of the entire moral normativity project is in itself a form of moral realism, in that it is something real and concrete that we can say about morality. This is the ledge I was trying to get you to talk me down off earlier but I don’t think I was making my point well. Perhaps I am still not. But what do you think of that idea? We know this one important REAL thing about morality. That normativity is an incoherent idea.

    I know it’s probably confusing to call that a form of “moral realism” but I’m just “seeing how things hang” as they say.

    Like

  7. garthdaisy

    “It’s not science”

    Neither is philosophy. But it is more scientifically grounded than philosophy, and thereby offers a better chance for consensus on ideas about morality, IMO.

    Like

  8. Massimo Post author

    Socratic,

    # do you think a better-informed, better-practiced, less-scientistic psychology subfield of evolutionary biology CAN “inform” ethics, as I have stated? #

    Forgive me but I’ve lost that thread, catching up with 60-something posts when I get up in the morning (still in Rome…). Could you perhaps give me a couple of examples?

    Garth,

    # It’s time philosophers stopped “it’s not science” as a pejorative, because philosophy is also not science #

    Sorry, but that’s a non-sequitur. When philosophers say “that’s not science” they don’t mean it as a pejorative, but rather as a descriptive statement. When applied to questionable science, as in the case of evopsych, it is aimed at someone who pretends to be doing science, something that philosophers certainly don’t do.

    # Except that it demonstrates where morality actually comes from in such a way as to make the pursuit of normative ethical theories a fool’s errand #

    Another non sequitur, and a big one too. First off, one doesn’t need to be a moral realist in order to pursue normative ethical theorizing, just like one doesn’t have to be a Platonist to be a mathematician. Second, the origin of how we came to know x tells us relatively little, if anything, about the ontology of x. You might as well say that science cannot exist because nature has given us limited reasoning powers that are pertinent only to our survival and reproduction, not to the discovery of the ultimate theory in fundamental physics. (Which, interestingly, is exactly the reasoning that theologians like Plantinga deploy to, allegedly, defeat naturalism.)

    # Evo-psych is an addition to psychology not a replacement for it #

    True, but it presents itself as the grand unifying theory of psychology. I’m afraid it falls inordinately short of that.

    # That’s Singer, not the field of philosophy. #

    You were asking for examples. Now you want universal agreement?

    # And now we have USA and Donald Trump. Thanks for the contribution to morality Locke, Hobbes, Smith #

    Boy, you are on a roll with non sequiturs today, my friend!

    Liked by 1 person

  9. Coel

    Hi garthdaisy,

    I’m still toying with the idea, though, that the complete discrediting of the entire moral normativity project is in itself a form of moral realism, in that it is something real and concrete that we can say about morality. This is the ledge I was trying to get you to talk me down off earlier but I don’t think I was making my point well.

    There is nothing at all problematic in making objective statements about subjective matters. Tom’s liking for chocolate is subjective (being a product of his brain), but the statement: “Tom likes chocolate” is objectively true — he indeed does!

    In the same way, the human moral system can be entirely subjective (and moral realism false) — but the statement “the human moral system is entirely subjective” can then be objectively true.

    Liked by 1 person

  10. SocraticGadfly

    Massimo, since I’m talking about future possibilities, I can’t give you specific examples.

    What I am saying is …

    Let’s toss current ev psych. And sociobiology before that. Let’s get a Kuhnian paradigm shift in this area launched. Let’s throw out the EEA entirely, since it’s not falsifiable. Let’s throw out “man the noble hunter-gatherer,” since it’s (a) sexist and (b) was LONG preceded by “man the less noble scavenger-gatherer.” Let’s underscore that in individuals, let alone species, genes are tendencies, not destinies.

    Can we then reboot? Can we create a new field? With a hat tip to you, let’s call it “evolutionary developmental psychology.” Or, “eco-devo psychology” for short.

    Could this “program” actually “inform” philosophy as to the development of human ethics?

    My answer would be yes, as an interested, reasonably intellectual layperson. “Is ≠ ought” caveats still apply, of course.

    Liked by 1 person

  11. brodix

    Socratic,

    Would it be reasonable to say that morals and ethics are idealizations?

    Such that we try to reach them and, should we do so, then set the bar a little higher. For instance, in some parts of the world and at points in history, slavery was considered normal practice, but there were/are those who refused to condone it, out of empathy with the victims and respect for what society should expect. They set the bar above that level, and through much trauma and backsliding, much of society has gone along.

    Currently we have an economic system which massively benefits a few at the expense of the many, and many people either accept that for ideological reasons or feel it is simply too difficult to change. Yet there are a fair number who insist it will change, trauma or not. Now say humanity does develop economic models and methods which circulate value throughout the economy and society in a sustainable way; what then? Wouldn’t we continue to set the bar a little higher? Yes, there are ideals which will never be attained, because they conflict with reality. We are not going to be sitting around on clouds, playing harps, in this reality. Yet making those judgments would be part of the process as well.

    My point is that instead of looking at it as a binary choice between some as-yet-undiscovered model of ethics and simply dropping the ball and saying humanity is fundamentally corrupt, we should view it as an ongoing process.

    Like

  12. Massimo Post author

    Socratic,

    I’m certainly open to further reboots of the whole evopsych-like approach. But I think there are fundamental reasons why it didn’t work as sociobiology and is not working very well as evopsych, and those reasons have to do with very real empirical limits on the testability of their hypotheses. So, sure, it’s possible that a third iteration will work better, but I’m curious to see what they’ll do differently.

    Even so, it remains to be seen what an origin story can actually tell us that is relevant to contemporary ethics.

    Dan,

    Indeed, both Dennett and I refer to modern scholarship, much of which is garbage precisely for the reasons you highlight. I only differ from you, maybe, in thinking that this isn’t just a problem for philosophy, it holds for every discipline, including the natural sciences.

    Garth,

    As I just remarked to Dan, the problem isn’t specific to philosophy. In my days as a working scientist I saw a hell of a lot of garbage turned into PhD theses and second- or third-rate papers. The difference is that scientific garbage costs a lot more than the philosophical variety.

    Liked by 2 people

Comments are closed.