Apparently, computers can not only think but can even fool a person into believing they are human. The Loebner Prize for artificial intelligence (AI) is awarded every year in this vivacious and alluring field: contestants are judged on how human-like the computer programs they develop manage to be. As such, the Loebner Prize is essentially a formal instantiation of a test named after Alan Turing, the great British mathematician. Turing tackled the problem of artificial intelligence by proposing an experiment famously known as the Turing test. The test is principally an attempt to elucidate possible standards, or what might be called demarcation criteria, for categorizing a machine as "intelligent". According to the Turing test, a computer could be said to "think" if it could fool an interrogator into believing that the conversation was with a human. The prospect of these “thinking machines” seems to defy thought and imagination, to the extent that Professor Marvin Minsky (MIT) seems quite convinced that the next generation of computers will become so intelligent that “we’ll be lucky if they are willing to keep us around the house as household pets.” If that doesn't make you cringe, buy a leash, tie it around your neck and practice fetching the mail from the mailbox.
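To make the setup concrete, here is a minimal sketch of the imitation game as described above: an interrogator converses blindly with two hidden parties and must say which one is human. The canned questions and the trivially scripted machine are purely illustrative assumptions, not any actual Loebner Prize entry.

```python
import random

def machine_reply(question: str) -> str:
    # A trivially canned respondent standing in for the "most human-like computer".
    return "That's an interesting question. What do you think?"

def human_reply(question: str) -> str:
    # The hidden human types an answer at the console.
    return input(f"(answer as the human) {question} > ")

def turing_test() -> None:
    # Randomly hide which label, A or B, is the machine.
    parties = [machine_reply, human_reply]
    random.shuffle(parties)
    respondents = dict(zip("AB", parties))
    for question in ["How was your day?",
                     "What is 7 times 8?",
                     "Describe the smell of rain."]:
        print(f"\nInterrogator: {question}")
        for label in "AB":
            print(f"{label}: {respondents[label](question)}")
    guess = input("\nWhich respondent is human, A or B? > ").strip().upper()
    # By the test's standard, the machine "thinks" exactly when it fools the judge.
    fooled = respondents.get(guess) is machine_reply
    print("The machine passed." if fooled else "The machine did not pass.")

if __name__ == "__main__":
    turing_test()
```

Notice that the test, as stated, asks for nothing beyond the interrogator's verdict; that thinness is precisely what the argument below takes aim at.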
At one extreme, there are theoreticians who profess that thinking is essentially information processing, reducible to computations based on symbol manipulation, in line with Turing’s basic outlook. A more moderate position would allow that thinking is infinitely complex and therefore precludes complete analysis; however, this position would maintain the basic conviction that minds and computers are essentially of the same kind, the former being nothing more than an optimization of the latter. The discontinuity or chasm between the mind and the computer was invented, they would argue, to preserve the dignity and value of a human being; in reality, no such distance exists beyond what is erroneously sustained by our misguided perceptions.

However, John Searle, a professor of philosophy at UC Berkeley, has argued forcefully in his book, Minds, Brains and Science, that the chasm is unbridgeable in principle. His famous illustration, often referred to as “the Chinese Room argument”, provides quite a convincing case against equating the mind and the computer: a person who knows no Chinese, locked in a room with a rulebook, could match incoming Chinese symbols with appropriate outgoing ones and so produce fluent answers without understanding a word. His carefully thought-out scenario shows that observationally equivalent phenomena might actually have contrary causal explanations. I happen to agree with Searle. Our unwarranted enthusiasm for technological innovations engenders certain uncritical dispositions that overlook the discontinuity implicit in comparing minds to computers; the discontinuity has nothing to do with the progress we’ve achieved. It really does not matter how digital we become, how rapidly complex calculations are done, or even whether they can be given a purely algorithmic delineation. It is not a matter of progressive possibility but one of principled impossibility. All that computers or digital machines have ever achieved or will ever achieve can be placed exclusively within a syntactic category. The mind, though syntactic and computational in a certain sense, will always transcend any reductionistic tendencies due to its intrinsic semantic and intentional nature. As such, the greatest creations of our genius, computers and digital machines, can never duplicate but at best simulate the mind.
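Searle's rulebook lends itself to a tiny sketch. The following toy program (its phrase table and replies are invented for illustration) does everything such a system can do: it matches incoming symbol strings to outgoing ones. Observationally the exchange may pass for conversation; causally, there is no understanding anywhere in the loop.

```python
# A toy "Chinese room": every reply is produced by blind rule-following
# over symbol shapes. The rulebook entries are invented for illustration;
# nothing in this program understands a word of what it shuffles.

RULEBOOK = {
    "你好": "你好！",            # a greeting maps to a greeting
    "你会思考吗？": "当然会。",   # "Can you think?" maps to "Of course."
}

def chinese_room(symbols: str) -> str:
    """Pure syntax: look the input shape up, hand the matching shape back."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

if __name__ == "__main__":
    print(chinese_room("你会思考吗？"))  # prints 当然会。
```

However large the rulebook grows, the program only ever gains more syntax; on Searle's view, that is why scale and speed never turn simulation into duplication.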
So… can a computer really think? Well, that depends. If consciousness can be reduced to syntax and a person to nothing more than a machine, then I suppose we could conclude that computers do in fact think. Alas, if everything is computational and essentially algorithmic, will it really matter if they do?
Finney Premkumar
(Published in The Clause, Copyright 2011)