Saturday, March 29, 2008

A neurologist finds nirvana 3

Another possible impact: strong AI. I've long believed such a thing was a non-starter, that machines do not think and cannot think. By contrast, the weak AI position holds only that a simulation of thinking is being, or will be, achieved.

A primary point is true understanding, and Searle's "Chinese room" analogy illustrates this beautifully. He imagined himself in an otherwise sealed room with two slots through which to communicate with the outside world. Through one, cards would be fed in bearing questions written in Chinese, a language he didn't understand. Using a huge manual containing all the permissible combinations of pictograms, he'd look up the answer, write it out and pass it through the out slot. At no point would he ever understand the meaning of the questions. This is precisely the mechanistic way in which computers work.
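
(Just to make the point concrete, here's a toy sketch of the room in Python: nothing but a lookup table. The "rule book" entries and the sample question are made up purely for illustration, not taken from Searle.)

    # A toy "Chinese room": answers are produced purely by looking up symbols
    # in a rule book, with no understanding of what any of them mean.
    # The entries below are invented solely for illustration.

    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天天气很好。",  # "How's the weather today?" -> "It's fine today."
    }

    def chinese_room(card):
        # Look the question up and hand back the prescribed answer;
        # understanding plays no part at any step.
        return RULE_BOOK.get(card, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(chinese_room("你好吗？"))

The person (or processor) shuffling the symbols could do this forever without ever knowing what the cards say.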

It strikes me, in light of Dr Taylor's video, that the problem lies with computers only simulating left-brain functions. Interestingly, where tasks such as pattern recognition, a right-brain function, are attempted, serial computers (the ones in everyday use) perform poorly. Much more effective are "neural nets", devices with a parallel architecture much closer to the brain's.
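
(For the curious, here's a minimal sketch of what I mean by a neural net, in Python: lots of simple units adjusting their connection weights together, rather than a program following explicit rules step by step. The task, learning XOR, and every number in it are illustrative choices only.)

    # A minimal neural net: two layers of weights trained by backpropagation.
    # The task (learning XOR) and all the numbers are illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

    W1 = rng.normal(size=(2, 4))  # input -> hidden weights
    W2 = rng.normal(size=(4, 1))  # hidden -> output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        h = sigmoid(X @ W1)       # all hidden units computed "in parallel"
        out = sigmoid(h @ W2)     # the network's current answers
        d_out = (out - y) * out * (1 - out)   # output-layer error signal
        d_h = (d_out @ W2.T) * h * (1 - h)    # hidden-layer error signal
        W2 -= 0.5 * h.T @ d_out   # nudge every weight to reduce the error
        W1 -= 0.5 * X.T @ d_h

    print(out.round(2))  # should end up close to 0, 1, 1, 0

No explicit rule for XOR appears anywhere; the pattern emerges from many small weight adjustments, which is roughly the contrast I'm drawing with the serial, rule-following style of the Chinese room.
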
Even thornier than understanding is the question of consciousness, a word which doesn't even have an agreed scientific definition. A common viewpoint is that it's the same as self-awareness. So could this be solved by linking up an otherwise intelligent computer to a video camera pointing at a mirror? Obviously not, though it's far more difficult to say precisely why not.

Again I think it comes down to simulating left-side functionality only; but more interestingly, when Dr Taylor's left brain stopped working, she experienced an awareness of her own body in a way we're not used to. More than that, it seems she also experienced a non-standard sensory input from the rest of the world.

Could this be the missing element of consciousness? A sense we're normally unaware of, normally filtered out of our everyday experience, but somehow forming a base level of consciousness?

One interesting pro-strong-AI analogy was developed by Jaron Lanier. Imagine a person having a neuron in their brain replaced by an equivalent electronic circuit. And then another, and another, until their whole brain was replaced. Is there some point at which they would stop being conscious? It's a powerful argument, but I think from the foregoing we can assume there is something happening on the right side which wouldn't work electronically, so yes, at some point we would see a breakdown or loss of functionality.
