The problems with strong Artificial Intelligence

Occasionally I have a video conversation with my colleague Dan Kaufman, who is a professor of philosophy at Missouri State University and a graduate of the City University of New York (where I teach, though he wasn’t one of my students!). You may want to check out his writings; he blogs over at Apophenia.

Anyway, the latest such conversation, archived at my YouTube channel (but you can watch it here, video below), is about ideas related to strong Artificial Intelligence and the philosophy of consciousness.

We chatted for about an hour and twenty minutes (I know, I know), beginning with an introduction to the basic strong AI thesis about the possibility of producing machines that think in a way similar to that of human beings. We then debated the nature and usefulness (or lack thereof?) of the Turing test, and asked ourselves whether our brains could be swapped for their silicon equivalents, and whether we would survive the procedure (we agreed that we wouldn’t be willing to try…).

I then explained why I think that “mind uploading” is a sci-fi chimera (see also this more technical paper I wrote in response to David Chalmers), rather than a real scientific possibility, after which we dug into the (in)famous “Chinese Room” thought experiment proposed decades ago by John Searle (see some more thoughts of mine about that here and here), which is still highly controversial. Finally, Dan concluded by explaining why, in his view, AI will not solve problems in the philosophy of mind. Enjoy!

 


3 thoughts on “The problems with strong Artificial Intelligence”

  1. I myself have always been very anti strong AI. Very good points made on all sides. As far as the Turing test goes, I always appeal to the idea that, since we were not designed to pass a Turing test, a computer that was designed to pass it would have no more in common with our mental lives than any other complex machine. Good discussion!

