When is AI really I?

Posted on February 9th, 2018 in Articles, Human Impacts, Robotics, Social Implications of Technology

Apparently AI software can now match or beat humans at reading comprehension. Researchers at Stanford developed a reading test, and AI systems at Microsoft and Alibaba are able to pass it, performing as well as humans would be expected to. This development gave rise to another wave of the question “is this really AI?” Or, when is AI really “I,” i.e., intelligent?

Turing, of course, tried to establish a metric for this with his blind test: conversing with a human or an AI hidden, as it were, behind a curtain. Daniel Dennett has suggested that motivation, or purpose, may be an essential aspect for AI to develop intelligence as we think of it. And IEEE has produced a second iteration of its overview of ethics issues to be addressed in AI development (comments due by March 12, 2018).

I have this vision of an argument among my ancestral Neanderthals. (My results from 23andMe say I’m 4% Neanderthal; others think it is more.) A tribal council, viewing the recent Cro-Magnon immigrants to Europe, asks similar questions. Clearly the newcomers were not as robust and lacked physical strength. They had smaller brains, and were just plain ugly. I don’t think they would have tried chess, Go, Jeopardy, or reading tests to determine intelligence, and of course they pre-dated Alan Turing.

New Zealand has decided that, besides large mammals such as great apes, whales, and dolphins, octopuses also warrant protection as sapient species. Certainly all of these animals, when carefully observed over time, raise questions about language, social behavior, tool use, and other traits we might have considered uniquely human.

I suppose we will continue to ask this question with every advance or breakthrough in AI until we reach what I will call the Brunner point. In his novel Stand on Zanzibar, John Brunner poses the question to that book’s “Watson/Siri/Alexa/Google” equivalent: “are you conscious?” The response: “whatever I answer, you will not be able to determine if my answer is correct.” To put it another way, when the AIs start discussing whether we are actually sapient, we will have passed a tipping point.

This debate is really a rat hole, a waste of time. At any point in time we will have multiple variations of “smart” things. Arguing whether my car is smarter than my phone, or than the Santa Claus mic-camera-speaker in my house (“it knows if you are sleeping, it knows if you’re awake, it knows if you’ve been bad or good …”), is a waste of good computing cycles. We cannot even agree on what human intelligences are.

And this is the key. Just as individual humans have different competencies, so we can expect machines to differ. I am not as chess-intelligent as a grandmaster, as music-intelligent as Mozart, or as visually intelligent as Michelangelo, but I’d like to think I qualify as “intelligent” by most of the daily Turing tests we apply to our encounters.

I expect that machine consciousness will emerge unexpected, unsought, and perhaps undetected. We won’t know it when it happens, and we will be in denial about it for some time. And while we debate, sufficiently smart AIs will be beating us in the stock market, manipulating our elections and our purchasing, finding cures for diseases in sapient primates, and managing our breeding through online dating systems. That is when we might get an answer to the “is AI really I” question.