Plato’s weekend suggestions

Here it is, our regular Friday diet of suggested readings for the weekend:

So, despite one of last week’s picks, it may be too early to declare the demise of Chomsky’s theory of universal grammar.

Using analogies properly: we need to go back to Aristotle and Wittgenstein.

The dangerous non-science of Freud and Jung, seen through the eyes of Jung’s wife.

“The world,” we read in the Gnostic Gospel of Philip, “came about through a mistake.”

If the best options are panpsychism and consciousness-as-illusion, I fear philosophy of mind is stuck in a rut.

Are you smart or rational? (No, the two are not the same thing.)

Bad science is the result of importing the market model into academia.

In defense of moral expertise?

172 thoughts on “Plato’s weekend suggestions”

  1. synred

    Massimo:

    Moving to more complex topics, there is increasing evidence from comparative primatologists that a basic “moral instinct” (i.e., a strong internal feeling that something is “wrong”) is hard wired in some non-human species; why wouldn’t it be the case in us as well?

    I find that experiment where the monkey gets angry when he is given cucumbers and the guy in the next cage gets grapes for the same task fascinating.

    It would be interesting to see if this could be extended to a third party. Say a well-fed monkey watching the interaction. Would he take the side of the cheated monkey? Show empathy, say, by grooming him? Share some of his excess grapes?

    If it didn’t work for monkeys you could move up to Bonobos…


  2. synred

    The word is increasingly — even predominantly — used today to commit a kind of fraud, whereby one attempts to assign one’s stated desires a greater force than they really have

    Well, my usual comment:

    This is not new, and I doubt it is more prevalent today. It might even be possible to test the hypothesis that it’s more prevalent by looking back at least as far as Gutenberg.

    Kings did it all the time as did/do politicians and parents.


  3. dbholmes

    Hi Socratic, I agree with what you had said, and “Libet-class” is fine by me. Just offering reassurance that at some point that kind of thing should end. Right now there are still defenders as you point out, but they have less and less support.


  4. SocraticGadfly

    Anyway, let’s take this back to the point of illustration — nerve speed. We, and other animals, have an autonomic nervous system for reaction to sudden pain, etc., because we can’t afford the time to wait for signals to go all the way to the brain.

    A robot, feeling its equivalent of pain, would not have that “lag,” with electrons traveling at the speed of light, so it would naturally have a different nervous system. In turn, that would surely influence its conscious perception of pain.

    And, of course, the robotic computerized substrate is something that has been consciously engineered, not a series of kludges developed over 400 million years or more.

    This is why the likes of Dennett, claiming substrate doesn’t matter, are simply laughable.

    Substrate matters, and so does the development of that substrate.


  5. Thomas Jones

    Re Massimo’s comment and Dan’s follow-up, and despite sharing Dan’s frustration with his enumerated points, as well as with his Didion quote earlier, Massimo’s points regarding pre- and post-language seem insightful to me, especially since language seems at times a stumbling block whenever we try to capture some unique feature or characteristic that might suffice to distinguish the moral from the non-moral. Proto-morality, anyone?


  6. dbholmes

    Hi Massimo, I’m not seeing how we determine what instincts are “moral” sentiments, except by engaging in just so stories regarding when they evolved (which we don’t know) and connecting that with activities we currently desire.

    This really walks us into evo-psych territory which is problematic.

    “pre-language, whatever we felt was immoral was immoral.”

    That seems a little weak, as morality usually requires some concept beyond the self and one’s own desires. But accepting what you say, then morality could/would be different from animal to animal… especially if you are arguing it is inherited and so available to alteration by genetic mutation.

    Or are you arguing that evolutionary pressures act as arbiter of morality?

    “Post-language we can articulate reasons, be challenged by others, and as a result override our gut feelings about this or that moral issue. Those gut feelings, however, are so hard to override precisely because they are rooted in a powerful biological, hard wired, instinct.”

    And this seems to reveal the whole problem. Post-language we recognize and discuss interests beyond instincts as being perhaps more important than the instincts. So what do our instincts tell us about what is or should be considered moral versus immoral? Not much. As you suggest, they will tell us what actions we will find easier/harder to comply with… but that does not let us know whether the choice has to do with morality at all, or, granting that it does, which action is “right/wrong”.


  7. Thomas Jones

    DB, “Or are you arguing that evolutionary pressures act as arbiter of morality?” Massimo will weigh in on this question, I’m sure. But I don’t think he is. It’s more about how we employ language as a concomitant factor in our development. We don’t speak dolphin, but some of those squeaks may represent an advance over a head-butt in terms of conveying what’s acceptable.


  8. Massimo Post author

    Dan,

    I don’t “define” morality as pro-social behavior. I’m suggesting that’s where our gut feelings, and then eventually our attempts to articulate them, came from. I quite like Thomas’ suggestion of calling it proto-morality. My point was that of course the observation that some parts of human behavior pertinent to morality are hard wired is pertinent to discussions of the nature of morality.

    That said, I share your skepticism about the current use of the term. That’s why I’m ever more inclined toward virtue ethics, as ethics (or morality) in ancient times just meant the descriptive-prescriptive (the two interacted) study of how to live one’s life. And it was aimed at one’s own self-improvement, not used to beat others over the head.

    Socratic,

    It is a common misconception that consequences are ignored by virtue ethicists. They aren’t. The difference between utilitarianism and virtue ethics doesn’t lie there; it lies in the focus that the latter, but not the former, has on individual character development.

    Dbholmes,

    While evolutionary just so stories make for weak science, they do still afford us decent hypothetical scenarios. Besides, how else would you explain the empirical findings I mentioned above about some surprisingly pro-social (even when it requires self sacrifice) behavior on the part of bonobos and the like?

    And no, I am most definitely not arguing that evolutionary pressures are the arbiters of morality. I am simply saying that without an evolutionary scenario one cannot make much sense of the origins of moral feelings. There is a huge difference between the proto-version of X and the highly complex, culturally developed version of the same X.


  9. Coel

    Dan says:

    Even better is that one is “morally obligated” to vote for Hillary, because one has a “moral duty” to prevent Trump from getting into the White House.

    And:

    The word [moral] is increasingly — even predominantly — used today to commit a kind of fraud, whereby one attempts to assign one’s stated desires a greater force than they really have …

    Agreed, and the Trump case is a good example. Labeling something “moral” or “immoral” is — in truth — an attempt to establish or defend a societal norm, based on one’s own preference as to what sort of society one wants to live in. But the use of the term “moral” is an attempt to claim greater backing or legitimacy than that. That’s because the whole language and indeed most humans presume moral realism, so using that language attempts to co-opt objective backing for one’s own view.

    Indeed, the traditional thing to do has been to invoke a god (constructed in one’s own image) that will back up one’s own opinion [“A final neuroimaging study demonstrated a clear convergence in neural activity when reasoning about one’s own beliefs and God’s beliefs, but clear divergences when reasoning about another person’s beliefs” — Epley et al., Proc Natl Acad Sci, 2009.]

    [As I’ve said before, my pet theory is that evolution programmed us all with the illusion of moral realism precisely because this appeal to objective backing seems to strengthen the appeal, and thus makes our moral faculties do their job of facilitating cooperation more effectively.]


  10. SocraticGadfly

    Massimo, thanks; I probably should have nuanced what I was saying more. And, the idea that society has shifted some things it considers moral, or proto-moral, is compatible with either school.

    ==

    If I can rephrase Massimo’s last graf:

    “We evolved to make moral value judgments in the way that other animals did not.”


  11. synred

    Hi SG: The computer could simulate the substrate. Put in whatever delays are needed, etc. It’s not happening anytime soon.


  12. Imad Zaheer

    Sorry for the delay (the weekend came and I got lazy and engaged in more fun activities), but here is the passage from Peter’s book. Re-reading it, I’m actually confused as to what exactly he means, but in the pages following the quote he goes on to talk about how Parfit’s theory convinced him of moral objectivity. I’ll let you be the judge.

    “We could attempt to separate those moral judgements that we owe to our evolutionary and cultural history from those that have a rational basis. But in what sense can a moral judgement have a rational basis anyway? In rereading my earlier text I can see how ambivalent I was about the idea of ethics being objectively true and rationally based. I wrote that reason leads to progress in morality, and I insisted that reason is not limited to the negative task of rejecting custom as a source of authority (pp. 99-100). On the contrary, I argued that reason leads to principles that “one’s own interests are one among many sets of interests, no more important than the similar interests of others” (p. 106). Moreover, I said that this truth is “eternal and universal, not dependent on the existence of human beings and other creatures with preferences,” although without such beings it would have no application. Yet I went on to say–drawing on arguments put forward by J.L. Mackie–that the idea of “objective values” or an “objective moral reality” is too “queer” and too full of problems to be used to support alternatives to the view that, given that one’s interests are no more important than the interests of others, the right thing to do is maximally to satisfy the preferences of all those affected by our actions. I therefore claimed that these alternatives–for example, the view that it is always wrong to kill an innocent person, no matter how many other innocent people may die if we refrain from killing one–should be regarded as the subjective preference of the person who holds them. If we do that, of course, they can be taken into account when we decide what will maximally satisfy the preferences of all those affected, but they are taken into account on terms set by the one who seeks to maximize the satisfaction of preferences–that is, by the preference utilitarian.

    I no longer believe that this argument succeeds…”


  13. dbholmes

    Hi Massimo, I agree that many gut feelings have become the basis for moral terms/concepts. But that does not tell me whether that language use is in fact accurate or useful, or that the gut feelings had anything to do with evolutionary hard-wiring.

    Let’s take the example of “justice” or “fairness” (as mentioned by Synred) which has been observed in some other primates. That a primate gets upset in an “unfair” situation, and can even show solidarity with another primate being treated unfairly, does not indicate “justice” or “fairness” is hardwired within primates. Maybe it is, maybe it isn’t. We don’t know, because it could simply show that with certain faculties, emotions, and environments there exists a capacity to recognize and react to “unfairness”. Capacity to do something given certain conditions is different than having a genetically hardwired “justice” detector, specifically selected for by evolution. Further, if shown to be hardwired, it can’t be known where that emerged and why it was selected, versus persisting (a spandrel). All it need do is not harm reproductive chances.

    Another example might be fear of outsiders or those that do not look like oneself (one’s tribe), or the converse, attraction to like organisms. Obviously that could be ancient, predating even mammalian ancestors. While we might spin a great story about how that has reproductive advantages, were they in fact necessary? And is that moral or simply aesthetic preference? According to your comment it would seem to have been moral in our pre-linguistic ancestors, and certainly was considered moral in some of our recent language-capable Homo sapiens. But many now view such instincts as merely aesthetic at best, immoral (if it has a moral component) at worst.

    Self-sacrifice for offspring or close relatives (or neighbors) may be instinctual, and seems easy (like anything really) to wrap into a story of how it may have provided evolutionary benefits. But again, where did that originate? I have no idea why we would be looking at primates as the source of that gut instinct. And if advantageous in theory, was it necessary to the point of being selected for, rather than simply staying with us? Plus, there are plenty of counterexamples (lack of altruism, predation) that make it hard to believe this is something absolutely hardwired.

    Indeed, much of moral discourse is due to people not acting according to the “gut instinct” of others. Are these mutants? Does that mean moral hardware is being discontinued in our line at the genetic level? And if it is argued that such exceptions are cases of the environment overwriting instinct, logically why can’t it also work in reverse (other than our wanting to believe we have “pure” instincts)? After all, one can spin great stories of how being a predatory bastard could have advantages, even within social communities.

    “While evolutionary just so stories make for weak science, they do still afford us decent hypothetical scenarios.”

    Ok, but decent enough to tell us what about current ethical concerns, beyond why certain actions may be tougher to commit to than others? Have you ever used such a story to argue for or against a position in ethics? It might make interesting decoration, but I can’t see it bringing much support to a position.


  14. synred

    We would still be simulating in our reality. It would establish the principle that consciousness is not dependent on the ‘physical’ substrate.

    It would not mean we understand how consciousness comes about. There might be biological processes we are unaware of that we are ‘blindly’ simulating, e.g., if we code in the chemistry and let it evolve. The creatures in the simulation would have the ‘am I a simulation?’ problem, but as it would not be a p-simulation they might be able to figure it out. On the other hand, if the glitches (a.k.a. miracles) are rare enough, it’ll be hard for them to know if they’re ‘real’ (if they have their own wise-guy Hume).

    In the end it’s just a techie version of the BIV.

    Such a simulation is not going to be practical soon, perhaps ever. I suspect never.

    --> Turtles all the way down. Mu.

    I have a draft of a story about this called ‘Causality – a fable of many worlds’ here:

    Story Land


    This story was inspired by a PF thread a couple of months back. It even has sex of a sort.

    It is basically un-proofed and I may never get round to proofing it, as I have no outlet for these stories anyhow. I did send the very short ‘Schrodinger’s Cat and the Law’ to Asimov’s Sci Fi and it hasn’t been rejected yet — its status is ‘received’ ;_(


  15. Thomas Jones

    db, “Ok, but decent enough to tell us what about current ethical concerns, beyond why certain actions may be tougher to commit to than others?” I’m having trouble with why you are having trouble with Massimo’s statement. How does one begin to even address pre-language behavior and concerns of biologically social species with “current ethical concerns”? So, is he making a case for bicameralism (which would probably be appreciated by Coel who sees theism lurking in the shadows of these discussions)? I don’t think so. And I don’t think Massimo uses language like “specifically selected for by evolution.” Another problem is the metaphoric use of words and phrases like “gut feelings” and “hardwiring.” In this case “gut feelings” seems more apt to me than “hardwiring” or programming or firmware or software.

    So, there’s an asked and answered quality to your comment when considering this statement from Massimo:

    “And no, I am most definitely not arguing that evolutionary pressures are the arbiters of morality. I am simply saying that without an evolutionary scenario one cannot make much sense of the origins of moral feelings. There is a huge difference between the proto-version of X and the highly complex, culturally developed version of the same X.”


  16. SocraticGadfly

    As a sidebar to some of this, I just got done reading this new(ish) book. I’d agree with the four-star reviewer. It’s also “interesting” to note there’s a 300 millisecond gap between sensory perception and conscious awareness of it. Libet and followers may not have had exactly the correct interpretation of their own gap on volition, but, to me, this is a further indication that such gaps not only exist, they’re of significance. https://www.amazon.com/Our-Own-Image-Artificial-Intelligence/dp/1605989649/ref=sr_1_1?ie=UTF8&qid=1474841546&sr=8-1&keywords=george+zarkadakis


  17. synred

    We just finished this one in the Martin Perl Book Club, on the same general topic.

    Markoff, John. Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. HarperCollins. Kindle Edition.


  18. Disagreeable Me (@Disagreeable_I)

    Hi Socratic,

    A robot, feeling its equivalent of pain, would not have that “lag,” with electrons traveling at the speed of light,

    Electrons don’t travel at the speed of light. Wires do carry signals much faster than biological nerves though.

    so it would naturally have a different nervous system.

    If it evolved naturally, sure. But if it’s designed, then it has whatever nervous system we design for it. If we want to mimic the design of the human nervous system, there’s nothing stopping us. If we need delays for some reason, so that nerve impulses are as slow as they are in humans, then there’s no reason we can’t introduce delays.

    And, of course, the robotic computerized substrate is something that has been consciously engineered, not a series of kludges developed over 400 million years or more.

    But, again, there’s no reason we can’t mimic what has evolved if we want to. I’m not saying that’s necessary for consciousness — quite likely there are much better ways to make conscious robots than blindly mimicking everything nature did (for instance, I doubt that the lag in our neurons is a crucial part of our consciousness), but the point of substrate independence is that this mimicry is possible and will produce similar results.

    This is why the likes of Dennett, claiming substrate doesn’t matter, are simply laughable.

    The claim that substrate doesn’t matter is often misunderstood, and you’ve illustrated that again. Nobody is claiming for instance that we can build a conscious mind out of nothing but helium atoms. Of course substrate matters in this respect. Rather the claim is that if a material allows us to reproduce the same kind of causal network as is present in a human brain, i.e. how the brain processes information, then reproducing that causal network and information processing is sufficient for consciousness. That’s why your argument from Libet and nerve signal duration is off the mark — because there is absolutely no reason to think we couldn’t reproduce the limitations and kludges of biology in another substrate should we wish to.
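
    For what it’s worth, here is a minimal sketch (purely illustrative; the latency figure and the function are made up for this comment, not something anyone here has proposed) of what “introducing delays” to mimic biological timing could look like:

        import time

        # Illustrative assumption only: roughly 100 ms of biological conduction/processing lag.
        BIOLOGICAL_LAG_SECONDS = 0.1

        def react_to_pain(intensity):
            # Deliberately reintroduce the biology-like "lag" before responding,
            # showing that a designed system can reproduce an evolved system's timing.
            time.sleep(BIOLOGICAL_LAG_SECONDS)
            return "withdraw" if intensity > 0.5 else "ignore"

        print(react_to_pain(0.8))  # prints "withdraw" after the imposed delay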


  19. Coel

    Hi db,

    That a primate gets upset in an “unfair” situation, and can even show solidarity with another primate being treated unfairly, does not indicate “justice” or “fairness” is hardwired within primates. […] it could simply show that with certain faculties, emotions, and environments there exists a capacity to recognize and react to “unfairness”.

    Is there much difference between those? In either case ideas of justice and fairness are evolved features. (I guess you might be saying they are not “hardwired” in the sense of not being modifiable by other things, and I’d agree they aren’t, with most such traits being fairly plastic.)

