Christof Koch, in an interview with MIT’s Technology Review, suggests that computer consciousness is a matter of complexity, and perhaps of how that complexity is implemented.
With the recently released movie on Alan Turing (The Imitation Game), the public is, once again, exposed to the basic concept … and Turing’s insight that “if it interacts like an intelligent, conscious being, then maybe it is one.” There is some irony here, since the movie pushes Turing a bit further along the autistic spectrum than is likely — and causes the attentive audience to ask whether Turing himself is conscious (he clearly is intelligent).
This concept is often confused with the question of “what makes us human?” or “how do we know that other entity is human?” … which is not the same as “is it conscious?” or “is it intelligent?” A WSJ column, “Why Digital Gurus Get Lost in the ‘Uncanny Valley’,” touches on this, pointing out that we use a number of unconscious clues to make this decision. (This is also why Pixar hires folks with acting backgrounds.)
There is a danger here. If we judge these characteristics by certain clues — like the angle of a dog’s head, or big eyes (eyes are significant here), and so forth — we may dismiss intelligent or conscious entities that fail our (unconscious?) tests. Of course, they may fail to recognize us as having these characteristics for parallel reasons.
The good news is that our current primary path for detecting intelligent life is SETI, and since all of those communications are very “Imitation Game”-like, we won’t have the chance to mess it up with our “Uncanny Valley” presumptions.
See also: the AI Apocalypse.