Plato’s reading suggestions, episode 92

Here it is, our regular Friday diet of suggested readings for the weekend:

The five energetic epochs of evolution.

On the dark history of intelligence (though the author unfortunately does not make the ancient distinction between intelligence and wisdom).

Are right-to-try laws such an obviously good idea?

What Buddhism and meditation can and cannot do for us. (My extended commentary here.)

Consciousness evolved at least twice on planet Earth.

The obsession with measuring everything is killing academia and college education.

De-provincializing philosophy.

Neutral monism: a bizarre non-solution to the mind-body non-problem.

How to be better at stress: an in-depth, very practical guide.

Richard Dawkins on his new book, and Donald Trump.

_____

Please notice that the duration of the comments window is three days (including publication day), and that comments are moderated for relevance (to the post one is allegedly commenting on), redundancy (not good), and tone (constructive is what we aim for). This applies to both the suggested readings and the regular posts. Thanks!


Categories: Plato's Suggestions

89 replies

  1. Brodix:

    I have not been able to resist conflating the essays on intelligence and Buddhism, because both deal in some way with our concept of reality. Intelligence is, or should be, the capacity to distinguish what is real from what is not, to expose the possible falseness of our perceptions and make us, in consequence, more confident in what we come to regard as true. Buddhism also emphasizes the need to recognize and confront the illusions of everyday life.

    In both cases, I think the “reality” concept is totally conditional on filtration through a human point of view, and on our effort to make sense of the world as we know it, not of some hypothetical God’s-eye view of ABSOLUTE REALITY. In this sense, the Buddhist conception of the world as illusory is not incompatible with the findings of science, which continues to do a remarkable job of defining reality within the limits of our vast and complex human culture. From a somewhat more radical perspective, a quarter of a million years after the emergence of Homo sapiens, we are currently experiencing a situation where the leader of the most powerful nation in history is contemplating some sort of nuclear confrontation with North Korea; a significant contrast with several billion years of prehistory, when evolution and other events occurred without a word of advice from us.


  2. DM wrote:

    But in certain circumstances, it is reasonable to infer Behaviour -> Internal States

    = = =

    By itself? I don’t think so. With all the other background information I mentioned, not to mention analogy? Sure. But no one disagrees with that.

  3. wtc,

    I just read the Buddhist article and, to be the total contrarian, what if, instead of distilling it down into some scientistically approved secularism, we considered the metaphysics being dismissed? It’s not as though science has fully quantified life. What, for instance, would be the scientistically approved quantification of love? Chemistry?

    Consider smacking your finger with a hammer: safe to say the “internet pipes” between those finger nerve endings and your brain function quite well, but what is their nature? What are all those mental modules through which we experience life? Are consciousness and thought separate? It would seem so, given that we can quickly shift from one thought module to another. Yet, on the other hand, is it possible to be conscious without being conscious of something? The sound of one hand clapping? Even meditation isn’t about blocking out the world so much as not focusing on any particular aspect of it.

    Now what if there were wireless communication? Obviously we can see information carried by light, but how deep does that go? Do you ever look at someone else and get a sense of their view of you, sort of like looking in a mirror? How different, i.e. how information dense, is that from the signals our brain normally processes of others’ thoughts, like attraction, suspicion, distraction, etc.? How about being in a group of people and sensing the shifting layers of their various interactions, such as when another person walks up?

    It is like our consciousness is looking through filters, at other filters. Consider that light shining through a red filter is red light. As such, our consciousness is both the light and the red. Similarly, we are all those thoughts and desires we experience.

    Now, what if, on an even deeper level, we are, as individuals, filters for an even more elemental sense of sentience? Being people, that is, bifocal, linear, computing creatures, each of us is likely focusing that sense very intently on whatever has our attention at the moment, and is thus seemingly separate and atomistically isolated.

    Are more basic hive creatures, such as bees, quite so isolated in their sense of self, or are they more open to being part of some broader organism? Much as our many impulses and modules inhabit our personal sense of self.

    So… getting back to the metaphysics, what about reincarnation? Maybe not as specific individuals, but in the broader sense of the organism renewing itself?

    Might go some ways toward explaining love.

    Massimo,

    The threading seems useful, but what about replying to replies?

  4. Hi Dan,

    I’m happy to have you change what you’ve said

    That’s fair. I accept I wrote imprecisely.

    As for being “validated” by Massimo’s characterization of Functionalism, it’s no different from the one I gave you.

    I agree. I was not especially validated by Massimo’s characterization of functionalism. I knew what functionalism was before you told me and I knew what functionalism was before Massimo told me.

    Dennett is wrong about heaps and heaps of things. (Most things in philosophy, in fact.) If he said what Robin said, then he is wrong about what functionalism is. Or is being quick and sloppy.

    Robin didn’t quote Dennett. He gave one of Dennett’s attitudes, so this is neither Dennett being quick and sloppy nor Dennett being wrong about what functionalism is.

    Dennett and I agree that if something behaves very like it is conscious then it is (almost certain to be) conscious. This is because Dennett and I believe it is hopelessly unrealistic to expect a system to pass something like a robust Turing Test without having all the internal functional states required for consciousness. Behaviour in the real world is an extremely good indicator of consciousness for this reason. It isn’t behaviourism, because we can at least acknowledge the possibility in principle that this heuristic could fail (e.g. something that passes the Turing Test with some sort of trick, like using a giant lookup table to know what to do). Although for practical purposes functionalism and behaviourism will agree on what is conscious, we disagree about what is necessary and sufficient for consciousness.

  5. Hi Bunsen Burner,

    You can see how weird the idea is by remembering that you can replace the computer with a human armed with paper and pen. If a formal model of consciousness capable of being evaluated by a computer can exist, then you should be capable of evaluating it with pen and paper. So when does this consciousness get instantiated? When the pencil is up, or down? What happens to the consciousness if you take a bathroom break? Is its experience affected by the speed of your writing?

    Yes. Thanks, I have been making the same point about computations being replaceable with pencil-and-paper calculations for years, here and elsewhere. I don’t see how it isn’t a knock-down refutation of computationalism.

    I have found a lot of people just bite the bullet and say that such a computation would have a conscious experience just like we do.
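
    To make the equivalence vivid, here is a minimal sketch (in Python, with an arbitrary two-state machine of my own invention) of the kind of transition table a person could execute just as well with pencil and paper; each step is nothing but a rule lookup:

    ```python
    # A toy Turing machine: every step is a table lookup that a human could
    # perform by hand. The machine below is an arbitrary illustration, not a
    # model of consciousness; the point is that "run on a computer" and
    # "worked out on paper" name the very same sequence of rule applications.

    # (state, symbol read) -> (symbol to write, head move, next state)
    RULES = {
        ("A", 0): (1, +1, "B"),
        ("A", 1): (1, -1, "B"),
        ("B", 0): (1, -1, "A"),
        ("B", 1): (1, +1, "HALT"),
    }

    def run(tape, state="A", head=0, max_steps=100):
        cells = dict(enumerate(tape))  # sparse tape; unwritten cells read as 0
        for _ in range(max_steps):
            if state == "HALT":
                break
            write, move, state = RULES[(state, cells.get(head, 0))]
            cells[head] = write
            head += move
        return [cells[i] for i in sorted(cells)]

    print(run([0, 0, 0]))  # -> [1, 1, 1, 1, 0]
    ```

    Whether that table is consulted by silicon or by a bored undergraduate with a pencil makes no computational difference, which is exactly what forces the awkward questions above.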

  6. And, most of what Dennett is right about was actually said by Ryle first!

  7. DM,

    Functionalism is a statement about internal mechanisms. If you infer consciousness from external behavior, that’s behaviorism, by definition.

    Brodix,

    I tried more than 2-level threading in the past, and it was rather messy. Let’s stick to the current format and see what happens.

    Robin, Bunsen,

    I agree with your argument, but isn’t that a version of Searle’s Chinese Room? Which, as you may recall, I find entirely convincing…

  8. Then again, let’s try 3-level threading for a bit…

  9. Massimo, I’m trying to remember what you’ve said in the past about Searle, and getting the third level of threading started. I am not sure what, if any, school of thought on ToM (theory of mind) I’d assign myself to. That said, I know that functionalism is better than behaviorism if for no other reason than, contra Daniel C. Dennett, it says “substrates” matter, since the internal mechanism of brain neurons is different from that of computer circuits.

  10. The problem with replying to replies is there isn’t enough of a notification system to remind you if someone replied to you in the previous pages.

  11. Hi Massimo,

    No, the argument has a different structure and different subject matter. Searle’s argument does not appeal to the equivalence of algorithms, and it is about understanding rather than consciousness.

    I don’t find Searle’s argument convincing.

    Still, they are on a similar theme, one started by Leibniz’s argument about the greatly enlarged mechanical mind.

  12. Roughly, one goes: “if syntax equals semantics, then the man in the CR would understand Chinese; the man in the CR doesn’t understand Chinese; therefore syntax is not equal to semantics.”

    The other goes: “if consciousness is algorithmic, then a pencil-and-paper calculation could produce conscious experiences; a pencil-and-paper calculation cannot produce conscious experience; therefore consciousness is not algorithmic.”

    Arguments that might defeat the first wouldn’t defeat the second and vice versa.
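
    Schematically (my own rendering, not anyone’s canonical formulation), both are instances of modus tollens, just with different content, which is why blocking the minor premise of one leaves the other untouched:

    ```latex
    % Both arguments instantiate modus tollens: P -> Q, not-Q |- not-P,
    % with different P and Q in each case (labels mine, for illustration).
    \begin{align*}
    \text{CR:}\quad   & \text{(syntax suffices for semantics)} \rightarrow \text{(the man understands Chinese)},\\
                      & \neg\,\text{(the man understands Chinese)} \;\vdash\; \neg\,\text{(syntax suffices for semantics)}\\[4pt]
    \text{P\&P:}\quad & \text{(consciousness is algorithmic)} \rightarrow \text{(a paper run yields experience)},\\
                      & \neg\,\text{(a paper run yields experience)} \;\vdash\; \neg\,\text{(consciousness is algorithmic)}
    \end{align*}
    ```

    The systems reply, for instance, challenges the Chinese Room’s minor premise (or at least its relevance) while saying nothing about pencil-and-paper runs.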

  13. Although it becomes more confusing to follow discussions broadly, as opposed to enhancing particular ones. The general versus specialist dichotomy. Linear and non-linear.

  14. Robin,

    Yes, the two arguments address different topics, as you say (understanding vs algorithmic nature), but the parallels are noteworthy, since in both cases the point is that the formal structure of something (brain, artificial setup) is insufficient to generate the phenomenon in question (consciousness, understanding Chinese).

  15. I also like the fact that it can be related to something that everyone has a good intuition about – reading and writing fiction. Searle’s argument requires people to use their intuitions on a large and complicated distributed system. That’s not something most people are readily acquainted with.

  16. The phrase “in certain circumstances” I think means that we’re not going on behaviour alone. If we were going on behaviour alone, then the circumstances would be irrelevant.

    For instance, if we’re talking about a physical, naturally evolved creature, the inference that its behaviour is produced by complex internal states is entirely reasonable.

    If we’re talking instead about a hypothetical entity which is constructed solely to distinguish between behaviourism and functionalism (e.g. something that uses a lookup table to decide what to do) then such an inference would not apply.

    In the real world, I have little doubt that the only way of feasibly achieving behaviour anything like that of a human is to employ a system of internal functional states similar to those of a human. So in the real world, whenever I see something that appears to be conscious (and here I’m assuming it meets every behavioural test you could conceivably throw at it), I infer that it is conscious, even though I am a functionalist.

  17. Hi Robin,

    I don’t see how it isn’t a knock-down refutation of computationalism.

    I have found a lot of people just bite the bullet and say that such a computation would have a conscious experience just like we do.

    You’ve answered your own question!

  18. Hi Massimo,

    If you infer consciousness from external behavior, that’s behaviorism, by definition.

    As explained to Dan, it’s not from external behaviour alone. It’s from combining functionalism with the premise that the only way it is practical to produce very sophisticated conscious-seeming behaviour is with internal functional states like those of a human.

    This premise is independent of functionalism, I suppose, and yet I imagine that most functionalists would likely agree with it. They distinguish themselves from behaviourists by accepting the possibility that there could be other ways to achieve the same ends (e.g. looking up what to do in a giant lookup table), but such tricks are completely unworkable in the real world. A lookup table that would consistently pass the Turing Test would have to contain more information than there is in the observable universe (I’m guessing). They also distinguish themselves from behaviourists by insisting that it is these internal functional states that cause consciousness.

    This nuanced view is, I believe, immune to the criticisms of behaviourism that functionalism was supposed to address, so it’s not simply “behaviorism by definition.”
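
    That guess is easy to sanity-check with back-of-the-envelope arithmetic; the alphabet and history length below are made-up, deliberately modest numbers:

    ```python
    import math

    # Back-of-envelope: a lookup table mapping every possible conversation
    # history to a canned reply. Both parameters are illustrative guesses.
    ALPHABET = 27        # 26 lowercase letters plus a space
    HISTORY_CHARS = 500  # a fairly short conversation history, in characters

    log10_entries = HISTORY_CHARS * math.log10(ALPHABET)
    print(f"table entries ~ 10^{log10_entries:.0f}")  # ~ 10^716

    # For scale: roughly 10^80 atoms in the observable universe, and common
    # estimates put its total information capacity near 10^120 bits.
    ```

    Even with far more charitable assumptions, the table dwarfs anything physically storable, which is why the loophole matters in principle but never in practice.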

  19. I have to post this, per Massimo’s retweet of it. Dawkins is either that idiotic, or that obtuse, about philosophy: https://twitter.com/RichardDawkins/status/898918753014091776

  20. Oh, and here’s the actual comment: “Philosophers happily speak of “continental philosophy.” What science department would appoint a professor to teach “continental chemistry”?”

  21. Well, I disagree entirely. I don’t think the inferences you are talking about are warranted without a lot of assumptions that you can’t ground. And when you add the computationalism to the functionalism, as most do, it gets even worse.

  22. Consciousness works both emotionally and rationally, with intuition as a seemingly intermediate state. Why these categories?

    Rationality is reductionistically linear, in that A implies B, thus building a supportable argument.

    Emotion is thermal. As we “heat up,” or “cool down,” we expand and contract. Expansion/contraction doesn’t have a focal point, except as it originates with the source, be it individual, or group. Like pictures drawn on a balloon, the areas of concentration themselves become dispersed, rather than more concentrated. Thus the tendency of the mind to go “blank,” in extreme emotional states.

    Consider the relationship to meditation: while it isn’t about expanding one’s desires, it is about letting mental structures recede, potentially to disperse rather than coalesce. Hence the observation that Newton wouldn’t have considered the implications of a falling apple if he were a Buddhist.

    Consider as well the political polarities of social liberalism, versus civil and cultural conservatism, where one side is seeking to dissolve boundaries between cultures and communities, in order to expand beyond the given limits, while the other is trying to strengthen the internal structures and focus of those entities and consequently creating greater gaps between different communities.

    Even in the universe, measures of space between galaxies expand, while those within galaxies contract, resulting in the structure of mass, with the gaps between growing. Yet those expanding measures are based on the wavelengths of radiation, the essential energy.

    So rationality is about structure; the syntax of language, for example. Emotion (positive), meanwhile, is about expanding one’s presence. (Negative emotion tends to break down structures as well: relationships, rage, etc.)

    Intuition is where this expansion of the mind makes connections that linear, reductionist rationality would not have made. Semantics being where this process is internalized and context is understood.

    Consequently, making a computer that can even “understand” requires sensory feedback from an environment that is many orders of magnitude more complex than can be stored, because the environment is not just information, but energy carrying information; and while the information is structure, the energy can manifest not only a broad range of informational structures, but those structures multiplied by how the more basic ones relate.

    While people, as organisms with a set of specific needs and goals, can react and relate to this input in ways dictated by those needs, an artificial intelligence doesn’t have that fundamental set of priorities at its core.

  23. DM: You have no idea what “most functionalists” would agree with, as by your own admission in previous discussions you’ve barely read any of the literature.

    Two philosophers who not only have read the literature but have taught it at the college level are telling you that you are wrong. They are giving you the reasons why you are wrong.

    Your discussion of the machine table in this comment betrays a complete lack of familiarity with what functionalists have actually argued in the literature. (Functionalism is directly inspired by the Turing test, so it would be very weird if the chief bit of conceptual apparatus in the functionalist toolbox — i.e. the machine table — were inconsistent with Turing.) Ditto for your characterization of where functionalists differ from behaviorists. There are also, it should be noted, at least three brands of behaviorism, which may differ significantly from one another — as in Ryle’s behaviorism vs. Skinner’s. The latter’s is an actual program within psychology, while the former is an account of the logic and semantics of mentalistic ascriptions.

    We also happened to do a lengthy dialogue on many if not most of these topics, for those who haven’t seen it. For you, DM, it might serve as a good refresher of the relevant issues and problems, including on the question of substrate.

    http://meaningoflife.tv/videos/31874
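
    For readers who haven’t met the term, a machine table is just a finite specification of how internal state and input jointly determine output and next state. A toy sketch, loosely in the spirit of Block’s soda-machine illustration (the exact table and code are my own):

    ```python
    # A machine table in the functionalist's sense: behaviour depends on the
    # internal state as well as on the input. A 10-cent machine fed nickels,
    # loosely after Block's soda-machine example; details are illustrative.

    # (current state, input) -> (output, next state)
    MACHINE_TABLE = {
        ("S0", "nickel"): (None,       "S5"),  # 5 cents credited, emit nothing
        ("S5", "nickel"): ("dispense", "S0"),  # 10 cents total: vend and reset
    }

    def step(state, coin):
        """One application of the machine table."""
        output, next_state = MACHINE_TABLE[(state, coin)]
        return output, next_state

    state = "S0"
    for coin in ("nickel", "nickel"):
        output, state = step(state, coin)
        print(coin, "->", output)  # same input, different output: state matters
    ```

    Identical inputs yield different outputs depending on the internal state, which is precisely what distinguishes functional organization from a bare stimulus-response pairing.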

  24. Maybe organic chemistry is the equivalent of continental philosophy, with analytic philosophy being basic chemistry?

  25. Robin: If you’ve read Searle’s exchange with Fodor and others on the Chinese Room, you’d know that nothing in the argument depends on there being a man in the room or there even being a room. Searle has given variations on the thought experiment which dispense with both.

    Searle’s argument simply relies on a non-standard realization in order to highlight certain relevant aspects of computation that make it a poor model for thinking. Nothing about the argument demands that particular example.

    It is worth noting that the problem of intentionality is just one of the many problems functionalism is saddled with. It may not even be the worst one. The two that I think are the most damning — the fact that qualia completely resist any functionalist analysis and the so-called “liberalism/chauvinism” problem (i.e. that either too many things are going to count as minds or too few) — are even stronger than Searle’s argument.

    Most of the significant problems with functionalism are nicely summarized in Ned Block’s essay, “Troubles with Functionalism.” http://mcps.umn.edu/philosophy/9_12Block.pdf

  26. Nah, not a good analogy. Though analytic would seem structural, while continental seems like an effort to grasp a human context…

  27. Hi Dan,

    A complex machine table of the sort that functionalists invoke is not a naive mapping of inputs to outputs in one step, which is what I was talking about.

  28. That said, per Block, per the philosophers he mentions by name in various places, per Dan, and per Robin, it’s not always clear when a philosopher is talking about what Block calls functionalism as a philosophy versus functionalism as a psychology. One’s interpretation of what problems a given philosopher allegedly faces will vary with one’s reading of where that philosopher falls on this distinction.

  29. I was wishing Alan White were willing to comment briefly on neutral monism, given his interests in Whitehead. The colour analogy goes back to Mach [as cited by Russell 1914, who comments that James holds “essentially” the same views]:

    “A color is a physical object so long as we consider its dependence upon its luminous source, upon other colors, upon heat, upon space, and so forth. Regarding, however, its dependence upon the retina…, it becomes a psychological object, a sensation. Not the subject, but the direction of our investigation, is different in the two domains.”

    I am not particularly sympathetic to the direction he then goes (out where some positivists ended up):

    “…sensations (complexes of elements) make up bodies. If, to the physicist, bodies appear the real, abiding existences, while sensations are regarded merely as their evanescent, transitory show, the physicist forgets, in the assumption of such a view, that all bodies are but thought-symbols for complexes of sensations (complexes of elements).”

    As to octopodes, we have as much evidence that they are conscious as we have that dogs are – broad analogies between our mental lives and the complexity of behaviour we experience as arising from them, and how this coheres with the neuroscience.

    As to the C-e R-m and related arguments – what more is there to say? The real arguments against these come from neuroscience and computer science, not philosophers. For example, consider how much computational work would have to be done to create a look-up table capable of suitably answering the kinds of queries DanK would ask in a Turing Test.
