Monday, January 15, 2018

Yet More Incoherent Thinking about AI


I've written before about how sloppy and incoherent a lot of popular writing about artificial intelligence is, for example here and here -- even by people who should know better.

Here's yet another example, a letter to the editor published in CACM (Communications of the ACM).

The author, a certain Arthur Gardner, claims "my iPhone seemed to understand what I was saying, but it was illusory". But nowhere does Mr. Gardner explain why it was "illusory", nor how he came to believe Siri did not really "understand", nor even what his criteria for "understanding" are.

He goes on to claim that "The code is clever, that is, cleverly designed, but just code." I am not really sure how a computer program can be something other than what it is, namely "code" (jargon for "a program"), or even why Mr. Gardner thinks this is a criticism of something.

Mr. Gardner states "Neither the chess program nor Siri has awareness or understanding". But, lacking rigorous definitions of "awareness" or "understanding", how can Mr. Gardner (or anyone else) make such claims with authority? I would say, for example, that Siri does exhibit rudimentary "awareness" because it responds to its environment. When I call its name, it responds. As for "understanding", again I say that Siri exhibits rudimentary "understanding" because it responds appropriately to many of my utterances. If I say, "Siri, set alarm for 12:30" it understands me and does what I ask. What other meanings of "awareness" and "understanding" does Mr. Gardner appeal to?

Mr. Gardner claims "what we are doing --- reading these words, asking maybe, 'Hmmm, what is intelligence?' --- is something no machine can do." But why? It's easy to write a program that will do exactly that: read words and type out "Hmmm, what is intelligence?" So what, specifically, is the distinction Mr. Gardner is appealing to?
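Indeed, a few lines suffice (a purely illustrative sketch; the function name is mine):

```python
# Purely illustrative: a program that "reads these words" and then
# asks the very question Mr. Gardner says no machine can ask.
def read_and_wonder(text):
    words = text.split()          # "read" the words
    print(f"Read {len(words)} words.")
    question = "Hmmm, what is intelligence?"
    print(question)
    return question

read_and_wonder("what we are doing is something no machine can do")
```

Trivial, of course. But that triviality is the problem: "reading words and asking the question" picks out no behavior that machines demonstrably lack, so the real distinction, whatever it is, remains unstated.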

He then says, "That which actually knows, cares, and chooses is the spirit, something every human being has. It is what distinguishes us from animals and from computers." First, there's the usual "actually" dodge. It never matters to the AI skeptic how smart a computer is, it is still never "actually" thinking. Of course, what "actual" thinking is, no one can ever tell me. Then there's the appeal to the "spirit", a nebulous, incoherent thingy that no one has ever shown to exist. And finally, there's the absurd claim that whatever a "spirit" is, it's lacking in animals. How does Mr. Gardner know that for certain? Has he ever observed any primates other than humans? They exhibit, as we can read in books like Chimpanzee Politics, many of the same kinds of "aware" and "intelligent" behaviors that humans indulge in.

This is just more completely incoherent drivel about artificial intelligence, no doubt driven by religion and the need to feel special. Why anyone thought this was worth publishing is beyond me.

12 comments:

phhht said...

Thanks for the post.

As usual, I agree with you, but I cannot express myself as clearly as you do.

Peter (Oz) Jones said...

First, Happy New Year Prof Shallit
For a while I have been concerned, as expressed in this song, Hoots Mon:
"There's a moose loose aboot this hoose"

Great to be reading your thoughts on science once more.
Best regards

Gingerbaker said...

I think most of us are concerned about the Cylon scenario, i.e., that suddenly the machines start having consistent behaviors outside of their programming. That they would have desires and intentions like people do. That they would become dangerous.

Is there any indication that this has happened or is likely to happen, or is even possible? The slightest hint of a problem?

Is it not the flesh only which has such fallibility? For AI to be dangerous, it would seem to need to be not only self-aware, but to have developed goals, an agenda, a value system. How could that be possible? Why would it happen?

MNb said...

According to this Flemish newspaper, AI now beats humans at reading comprehension:

https://www.demorgen.be/technologie/het-is-zover-machines-kunnen-nu-officieel-beter-begrijpend-lezen-dan-mensen-bf288dec/

Jeffrey Shallit said...

Is there any indication that this has happened or is likely to happen, or is even possible?

I honestly don't know what "desires" and "intentions" are -- that is, I know of no simple definition of them, nor any way I could test if a machine or a person actually has such things. Any suggestions?

If a robot is programmed so that, when its batteries get low, it finds an electrical outlet and plugs itself in, could we reasonably say it has a "desire" to feed on electricity? Or the "intention" to do so? It seems reasonable to me.

It's certainly possible to have unpredictable behavior -- for example, if a machine has access to a source of truly random bits, like those arising from radioactive decay.

Gerry Myerson said...

Gingerbaker, perhaps https://www.barrons.com/articles/the-crash-of-1987-1507954684 will convince you that AI has been dangerous for at least the last 30 years, with or without self-awareness, goals, agendas, and value systems. All it takes is code sufficiently complex that the coders can't foresee all the consequences.

Jeffrey Shallit said...

I think Gerry has a good and important point there. The more complex a program becomes, the harder it becomes to predict. As evidence I point to the busy beaver problem, where almost trivial programs have ridiculously complex behaviors.
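For instance, the standard four-state busy beaver champion (Brady's machine) has just eight transition rules, yet runs for 107 steps before halting, leaving 13 ones on the tape. A quick simulation, assuming the usual encoding:

```python
# The 4-state busy beaver champion: eight rules, 107 steps before halting.
# Each rule maps (state, symbol read) -> (symbol written, head move, next state).
rules = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (0, -1, 'C'),
    ('C', 0): (1, +1, 'H'), ('C', 1): (1, -1, 'D'),
    ('D', 0): (1, +1, 'D'), ('D', 1): (0, +1, 'A'),
}

def run(rules, start='A', halt='H'):
    tape, pos, state, steps = {}, 0, start, 0
    while state != halt:
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write     # write the new symbol
        pos += move           # move the head left or right
        steps += 1
    return steps, sum(tape.values())

print(run(rules))  # (107, 13): 107 steps, 13 ones on the tape
```

And that is only four states; for five states the champion machine runs tens of millions of steps, and nobody can predict such behavior short of simulating it.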

philosopher-animal said...

For me, what is depressing is not just that such a letter was published, but that there's *nothing new* in what was said. For just once I'd like to see an anti-AI argument that doesn't look like it was written at least 30 years ago.

cody said...

“It is what distinguishes us from animals and from computers” reminded me of this gem from the first episode of Futurama:

Fry: Who cares what you're programmed for. If someone programmed you to jump off a bridge would you do it?
Bender: I'll have to check my program ... yep.
Leela: [from outside] Open up!
Fry: C'mon, Bender! It's up to you to make your own decisions in life. That's what separates people and robots from animals ... and animal robots.

As for the need to feel special, months ago I started getting ads on facebook for “the principle,” which turned out to be a documentary arguing that the geocentric model was never really properly refuted. I argued with the person running the page briefly and learned they were pushing the Tychonic system (with the Sun & Moon orbiting Earth, and the rest of the planets orbiting the Sun), and that they were using Mach's principle and relativity to argue that Earth was just as sensible a choice for the center of the universe as anywhere else, before they blocked me.

But I could still look at their page, so I looked through their pictures, and they had all these inspirational-quote-type pictures with the hashtag “iamsignificant,” which to me really drove home the point that it was all about the delusion of being the center of the universe. Apparently the incredible achievements of humankind aren't enough for some people.

Lee Witt said...

Re: the ACM viewpoint. This comment is made there:

A program that can play winning chess or Go is not one.

Didn't AlphaGo essentially teach itself to play at a level high enough to beat the best human players? That would seem to demonstrate a level of learning ability -- enough, in my opinion, to be a serious response to the people you've linked to -- even with AlphaGo's goal being as narrowly focused as it is.

JimV said...

"Is it not the flesh only which has such fallibility?"

I think the concern is that flesh's fallibility is apt to make its way into computer algorithms, in particular the fallibility that says that as long as I am making big bucks I don't care what my algorithms may be doing to civilization. I tend to think that is already happening.

As for computers becoming self-aware and deciding to be sociopaths, that premise tends to make for bad movies, I agree, but in principle (the principle that we are in fact nano-tech machines with a mixture of innate and learned algorithms ourselves) it could happen.

Kerry Liles said...

I couldn't find anyplace to leave a comment that was not specifically related to an existing post, so here goes:

How was Knuth's 80th birthday celebration? I understood you were scheduled to go to Sweden for the celebration, but the facebook group didn't seem to provide a lot of detail - just some very interesting pictures etc. If you were able to go, I (perhaps others?) would be interested in your impressions etc.

regards,
Kerry Liles