Food for thought

Our regular suggested Friday diet of readings for the weekend:

A two-part essay (I and II) by Mike LaBossiere over at talkingphilosophy.com explores the concept of artificial mind, using the recent movie Ex Machina as a platform. The real question, according to LaBossiere, is not whether AI will be able to pass the Turing test (that’s easy!), but whether we will be able to design an effective Voight-Kampff test. Our long-term survival may depend on it…

“Neuro-aesthetics” is all the rage these days, but here is an article that argues that art may actually reveal the limits of neuroscience itself.

Is consciousness an emergent, systemic property of brain functioning? Research on anesthesia and its effects on consciousness may lead us to the answer.

The idea of cosmopolitanism is old. The term was coined by the Cynics and developed by the Stoics. Nigel Warburton discusses its roots and its urgency in the modern world.

Noted philosopher Harry Frankfurt (the guy who published a delightful essay On Bullshit) argues that economic inequality is not inherently immoral, and that treating it as such actually hinders our ability to tackle the real problem.


Categories: Plato's Suggestions

13 replies

  1. The essay on Ex Machina misses a key point from the movie. The technology involved in making Ava is wetware* (a point stressed in the movie, as I remember it). This involves the “new synthesis” of AI+SB (synthetic biology), so Ava is biocomputing, not conventional computing.

    * http://en.wikipedia.org/wiki/Wetware_computer

  2. Philip, I’m not sure why the specific technology invoked by the movie makes a difference to LaBossiere’s argument. Could you expand?

  3. I see that Michael LaBossiere has posted a part III* to his essay. I think that testing for emotion and consciousness in synthetic bots could end up being similar to that for humans, involving detecting distinct chemical changes as well as changes in patterns of “information” processing (as in an MRI). So the particular material substrate of the bot will matter. (Unless you can argue that there can be beings like us with the exact same material makeup of their neurons and other cells but have no consciousness or emotion.)

    * http://blog.talkingphilosophy.com/?p=8854

  4. Philip, as you probably recall, I certainly think that the substrate matters in order to originate phenomenal consciousness. But the AI hypothesized in the movie is a hybrid, so there is no way to make predictions a priori about whether she would / would not have phenomenal experiences. Which means the point of the article remains unaffected, I think.

  5. Former CFI head honcho Ron Lindsay made a similar claim about income inequality. That was just one of many things about which he was wrong.

  6. My point (which I think is missing in essays about AI like Michael LaBossiere’s) is that, simply put, biocomputing is more than (conventional) computing. And it isn’t exactly clear what the hybrid (biomaterial, conventional material) mix is in Ava. At least I didn’t find that analysis in the essay. But it’s the really critical analysis to make in determining whether Ava has emotions.

  7. I did find Mike LaBossiere’s Ex Machina article reasonable enough given modern understandings, though once we have better understandings, I do suspect that many of these fears and uncertainties will dissipate. Someday when you look consciousness up on Wikipedia, I believe that there will be a scientifically accepted definition, not just a bunch of hum-d-dums and this-and-thats. Furthermore I do believe that my own such model will stand strong as a worthy candidate.

    More to the point however, it may be observed that evolution seems to have found it effective to build all advanced forms of life to be conscious. This does imply that our pathetic machines will never be able to autonomously function as well as… perhaps even ants! (Unless of course we’re able to make them conscious, which I doubt.) Thus those “Terminator” scenarios might indeed just be Hollywood propaganda created… well, for the entertainment that we actually pay good money for.

    As for the aesthetics piece, yes I do believe that art helps demonstrate where modern neuroscience fails. I’d be far more impressed however, by a paper which effectively demonstrated an area in which the field does not fail.

    As far as anesthesia goes, in my own consciousness model I place this under an original classification. This is not “subconsciousness,” which I leave standard, but rather “sub-consciousness,” spoken with a slight pause midway. Alcohol, sleep, drugs, hypnosis, and so on, effectively impair conscious function. Furthermore consciousness may be eliminated entirely through such mechanisms, leaving the non-conscious mind to stand alone. From here our “dreams” can simply be interpreted as the conscious mind functioning as well as it’s able, during the impairment of sleep.

    As for the moral piece, I believe that I said enough about this last time with my empathy discussion. Today morality does seem to effectively crowd out my own “instrumental” speculation, though I shouldn’t throw stones.

    As for the “inequality is not immoral” piece, I actually wrote the following before reading it:

    You might suspect that I’d agree. I don’t however, given that I’m quite a stickler about definition. Capitalism works given that it addresses our naturally selfish nature. Nevertheless it mustn’t be left unchecked — government must step in to tame it, given that the powerful will naturally abuse the weak for their own interests. What I believe that Harry Frankfurt wanted to get at, but failed to, perhaps given the preponderance of moral ethics rather than instrumental ethics, is that socialism doesn’t work given that we are naturally selfish.

    After actually reading the article however, I must instead observe that it was simply a load of crap! This man wants us to believe that we are not jealous social creatures which thus feel good or bad based upon our perceptions of what others have. I’ll stand by my position above, or what I think should have been his position all along.

  8. On whether our “pathetic machines” can ever be conscious, if this includes “bio-machines”(made via synthetic biology), then what I have called “The biocomputing phenomenology thesis” could apply:
    http://codicalist.wordpress.com/2015/09/13/is-biocomputing-computing/

  9. “The idea of cosmopolitanism is old. The term was coined by the Cynics and developed by the Stoics. Nigel Warburton discusses its roots and its urgency in the modern world.”

    When I read this article I was both heartened and dismayed.

    I was heartened by the fact that the atheist world was turning its attention to this, the most central and most existential of human problems. Christianity has always grappled with this problem, with only some success. We need all belief systems, religious or non-religious, to agree on the primacy of this problem. We need them to collaborate and support each other’s work. The fact of this agreement would sensitise humanity and guide humanity to address this problem.

    On the other hand I was dismayed by the shallowness of this article. It gave a good introduction but left me hungering for a deeper treatment because its diagnosis was shallow and its advice was shallow. But even so, the mere fact that philosophers are talking about the subject is an important step in the right direction. So, though I expected more, I am glad for what it gave.

  10. On the anesthesia article, I wish the authors had related it to people who are at least somewhat insensitive to some of the effects of general anesthesia, at least when administered at standard doses, or when administered inadequately to people for whom a standard dose would work, and who thus retain some awareness. The fact that the awareness generally seems partial, not total, seems to undercut the idea of an on-off switch.

    https://en.wikipedia.org/wiki/Anesthesia_awareness

  11. On Warburton, I don’t find him shallow, while noting that he’s in the normal word length of an Aeon article.

    I do find him naive, even engaging in wishful thinking, especially re the Internet and most especially re social media. The risks that Cass Sunstein mentioned (Massimo, healthy sarcasm alert: I’m agreeing with a known major neoliberal thinker!) are indeed very much “out there” today. American politics of this moment shows that the Net has arguably *increased* tribalism in some ways.

    Beyond that, as the Iran protests of a year or two ago showed, repressive governments will let people Tweet away … while spying on them. This is nothing new. Before the downfall of the empires, the more repressive governments of Europe kept open the coffeeshops, and encouraged them to have subscriptions to every conceivable newspaper, to make it easier to spy on late 19th century dissidents.

    In short, his “in principle” isn’t “in reality” right now, and it may well not be so for the foreseeable future.

  12. Sorry about the “pathetic machines” crack, Philip. By this I simply meant that the creations of evolution, as well as the dynamics of nature in general, seem many orders of magnitude beyond what we can figure out how to make. Furthermore I personally just use our technology rather than build it, and thus naturally shouldn’t have as much appreciation for it as you do.

    Yes Socratic, I also think we should get rid of the on/off switch analogy regarding anesthesia — let’s trade it for a dimmer switch. Thus I say that a person who is sleeping is neither conscious nor non-conscious, but rather “sub-conscious,” and to the magnitude of conscious degradation. The same applies for the effects of alcohol and so on — in practice we seem to “dim” our conscious experiences in a variety of ways. While some of them do seem quite enjoyable, “sleep” seems to be most necessary for our health. (As mentioned above, I do leave the standard “subconscious” term unaltered.)

  13. Pathetic machines are surely those machines that feel!

    There is a nice review by William Seeley (a philosopher and sculptor) of neuroaesthetics from back in 2007, in which he sees it as a natural approach to the “phenomenal character of aesthetic experience”; I would probably add some of the emotional character too. His central thesis is pretty unobjectionable to me: “(CH1) Perception is an active process (CH2) Art exploits the properties of this process (CH3) An understanding of the way artworks exploit these processes plays a role in explanations of art and aesthetic experience”. As to the semantics of art arising from its culture, and how they interact with perceptual processes (a trivial example would be jolly music with gloomy lyrics), ISTM that the neurobiology might have something interesting to say too.
