Commentary on Turing's ``Computing Machinery and Intelligence'' 1

 

Turing's aim was to refute claims that aspects of human intelligence were in some mysterious way superior to the artificial intelligence that Turing machines might be programmed to manifest. He sought to do this by proposing a conversational test to distinguish human from artificial intelligence, a test which, he claimed, would by the end of the twentieth century fail to work. And, it must be admitted, it often does fail---not because machines are so intelligent, but because humans, many of them at least, are so wooden. The underlying question is about the limits of ``algorithmic intelligence'': whether all reasoning is in accordance with some rule or other---whether, that is, to be reasonable is to be acting in accordance with a rule---or whether some exercises of reason go beyond anything covered by antecedent rules. But whether or not this is so, there are many people---bureaucrats, legal clerks, accountants---who are entirely rule-governed, and take care never to do or say anything unless it is in accordance with the rule-book. Turing's Test would classify them with the artificial algorithmic intelligences, not because they were artificial, but because their responses were mechanical.

It is a distinction we are familiar with in ordinary social life. Often we find ourselves having entirely predictable conversations with people who invariably say the correct thing, utter conventional opinions, and manifest standard responses; but occasionally we meet someone who has interesting ideas and says things which we had not thought of but which we immediately recognise as right and fitting. Turing parries this variant of Lady Lovelace's objection [p.450.](p.21.){p.56.} by suggesting that ``There is nothing new under the sun'', and that all our thoughts are really unoriginal. But the objection lacks force, as Turing himself admits: ``I do not expect this reply to silence my critic. He will say that . . . <they> . . . are due to some creative mental act . . .''[p.451.](pp.21-22.){p.57.} But the crucial point is that we do make the distinction, whether or not we sometimes misapply it. We distinguish conversation with Turing's critic, who has a mind of his own and, when we introduce a topic, can ``go on'', making fresh apposite points, from conversation with someone who produces only programmed responses with nothing individual or original about them. We have the concept of non-algorithmic intelligence.

Turing says that the argument from creativity leads back to the argument from consciousness, which he considered closed, since those who support it are committed, whether they realise it or not, to solipsism. It was a point easily made in 1950 against the background of the then dominant Verificationist theory of meaning. But meaning is not constituted by the method of verification. Many understand Fermat's Last Theorem, though few can fathom Andrew Wiles' proof. The tests of whether a person is conscious are one thing, what it means to say that a person is conscious is another. Meaning is a matter not of tests, but of entailment patterns, of what follows from the ascription, or is inconsistent with it. It would be inconsistent of me to say that you were in great pain, and go on to assert that you were as happy as happy can be; rather, I should show sympathy, and not expect you to be able to think hard about peripheral matters. The nightmarish case of a person paralysed by curare, yet conscious while an operation is performed under an ineffective anaesthetic shows how different the concept of consciousness is from the criteria for its ascription. It is characteristic of consciousness and mental concepts generally that though we often have good grounds for ascribing them, our ascriptions are subject to subsequent withdrawal. It is the same with truth. We often have good grounds for holding that something is true, and quite often are right in doing so, but, apart from some empty tautologies, live with the perpetual possibility of being wrong. This shows that Turing's Test is much less definitive than he thought. Its logic is not the simple, clear logic of deductive argument, but the messier ``dialectical'' logic of prima facie arguments and counter-arguments, of objections and rebuttals, inconclusive arguments, and conclusions subject to `other things being equal' clauses, and the possibility of our having later to emend them. 
It does not follow that Turing's Test is no good, but it does follow that its application is more difficult, and may involve wider considerations than a simple exchange of conversational gambits.

One feature of consciousness is that a conscious being can be the subject of its own thought. Turing complains that no evidence was offered for this claim, but it seems true, and I think that it opens the door, when we come to think about our own rationality, to certain sorts of reflexive thought and self-referring argument of great importance.

Turing is dismissive of the ``Heads in the Sand'' objection, when the consequences of mechanism are considered and found to be too dreadful. But although we have to be prepared to discover that things are as they are and their consequences will be what they will be, there are good reasons for being chary of throwing over established modes of thought too easily. They may have much going for them, and often have been tried over many generations, and found to be reliable. In particular we should be chary of throwing over the idea of rationality itself. If some theory has as a consequence that we cannot trust our intimations of rationality, then we may well be sceptical of the reasoning that leads us to adopt that theory. It is a very general test of a metaphysical system: what account does it give of itself? Does it cut the ground from underneath the considerations that might incline us to accept it? On an autobiographical note it was considerations of this sort that first led me to think about the self-referential paradoxes of reductive accounts of human reasoning, and ultimately to Gödel's theorem as encapsulating the principle of self-reference in a rigorous way.

Turing allows that there are limitations to algorithmic intelligence, but resists the conclusion that human intelligence is therefore superior. Although Gödel and Turing proved their own theorems, each using principles of inference that went beyond those laid down for the system they were studying, it might be that each was actually an instantiation of some stronger system of algorithmic reasoning. After all, once some original move has been recognised as a right one, it becomes possible to encapsulate it in some definitely formulated rule. It has often happened in the history of the creative arts. Novelty in music, in painting, in literature, is first recognised as original, then generally accepted and copied, and then systematized and standardised, and finally becomes vieux jeu. So seeming novelty in human intelligence might be algorithmic in some wider system after all; and, even if not already algorithmic, there would be some machine that could be built incorporating the apparently novel move. So ``our superiority can only be felt on such an occasion in relation to the one machine over which we have secured our petty triumph. There can be no question of triumphing simultaneously over all machines. In short, then, there might be men cleverer than any given machine, but then there might be other machines cleverer again, and so on.''[p.445.](p.16.){p.52.}
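The quoted dialectic can be made vivid with a toy sketch. The point is that over any one machine we can secure a ``petty triumph'' by asserting the opposite of whatever it answers, but that very move is itself mechanical, and so yields a further machine we have not triumphed over. This is only an illustration, not Turing's own construction; the names (`outwit`, `stubborn_yes`, `improved`) are invented for the example.

```python
# Illustrative sketch of "no question of triumphing simultaneously
# over all machines": a machine is modelled as a yes/no function on
# questions, and we "triumph" over it by diagonalizing.

def outwit(machine):
    """Given one machine, return a question together with the answer
    the machine gets wrong on it: we simply assert the opposite of
    whatever the machine says."""
    question = ("Will you answer 'no' to this question?", machine)
    machine_says = machine(question)
    return question, (not machine_says)

def stubborn_yes(question):
    """A particularly wooden machine: it always answers yes."""
    return True

q, right_answer = outwit(stubborn_yes)
assert stubborn_yes(q) != right_answer   # a petty triumph over this one machine

# But outwit is itself algorithmic, so it defines a *new* machine
# that gets that very question right:
def improved(question):
    return not stubborn_yes(question)

assert improved(q) == right_answer

# ...and improved can in turn be outwitted, "and so on":
q2, r2 = outwit(improved)
assert improved(q2) != r2
```

Each triumph is over one fixed machine only; mechanizing the triumph simply produces the next machine in the series, which is exactly the regress Turing describes.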

These objections were ones I found it difficult to overcome when I was thinking out my ``Minds, Machines and Gödel''. 2 I overcame the first by considering the purported mechanical model of the human's own mind; and I neutralised the second by following the `and so on' up into the transfinite. Douglas Hofstadter 3 is not sure whether the foray into the transfinite secures or refutes my argument, and opines that it refutes it because of the Church-Kleene theorem that ``There is no recursively related notation system which gives a name to every constructive ordinal'', which means, in the case of Turing's contest between an algorithmic machine and a human mind, ``that no algorithmic method can tell how to apply the method of Gödel to all possible kinds of formal system''. But the absence of such an algorithmic method is crippling only to an algorithmic intelligence. Only if the human mind were an algorithmic intelligence would it be unable to keep up the pressure as the contest ascended through ever higher transfinite ordinals. If the mind can understand Gödel's theorem, as it seems it can, then it will be able to apply it in novel circumstances not covered by any rule-book, and so out-gun an algorithmic machine, however ordinally complex its Gödelizing operator is.
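The shape of the transfinite ascent can be sketched in miniature. Gödel's theorem lets us strengthen any consistent formal system S to S + Con(S); iterating gives a chain of ever stronger systems, and taking the union of all finite stages gives a limit system which can itself be Gödelized again. The model below is only a schematic illustration under invented names (`System`, `add_consistency`, `omega_limit`): a system is abstracted to a set of axiom-labels, and Con(S) to a fresh label, with no proof theory behind it.

```python
# Toy model of the transfinite "and so on": each system strictly
# extends its predecessor, and the limit stage can be Godelized again.
from dataclasses import dataclass

@dataclass(frozen=True)
class System:
    """A formal system, abstracted to the set of axiom-labels it holds."""
    axioms: frozenset

def add_consistency(s: System) -> System:
    """Godelize: extend S with a label standing for Con(S), the
    statement S itself cannot prove (if consistent)."""
    return System(s.axioms | {("Con", s.axioms)})

def ascend(base: System, n: int) -> System:
    """Finite iteration: S, S + Con(S), S + Con(S) + Con(S + Con(S)), ..."""
    s = base
    for _ in range(n):
        s = add_consistency(s)
    return s

def omega_limit(base: System, n: int) -> System:
    """Limit stage: the union of the finite stages up to n -- the step
    that takes the contest into the transfinite."""
    axioms = frozenset().union(*(ascend(base, k).axioms for k in range(n + 1)))
    return System(axioms)

pa = System(frozenset({"PA"}))
s1 = add_consistency(pa)
assert s1.axioms > pa.axioms                  # each stage strictly extends the last
assert add_consistency(s1).axioms > s1.axioms

limit = omega_limit(pa, 3)
assert all(ascend(pa, k).axioms <= limit.axioms for k in range(4))
assert add_consistency(limit).axioms > limit.axioms   # the limit can be Godelized too
```

The Church-Kleene obstacle is that no single algorithm names every constructive ordinal along which this ascent continues; the sketch shows only why no particular stage, finite or limit, closes it off.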

1. ``Computing Machinery and Intelligence''. First published in Mind, 59, 1950; page references to this version are in square brackets thus [p.445.]; reprinted in Alan Ross Anderson, Minds and Machines, Englewood Cliffs, N.J., 1964, pp.4-30; page references to this version are in round brackets thus (p.16.); also in The Philosophy of Artificial Intelligence, ed. Margaret Boden, Oxford University Press, 1990; page references to this version are in curly brackets thus {p.52.}. Also published under the title ``Can a Machine Think?'', in volume 4 of The World of Mathematics, ed. James R. Newman, Simon & Schuster, 1956, pp.2099-2123, which has now been reprinted by Dover in their 2000 edition. It is partially reprinted in Douglas R. Hofstadter and Daniel C. Dennett, The Mind's I, Basic Books, 1981.
2. ``Minds, Machines and Gödel'', first published in Philosophy, XXXVI, 1961, pp.112-127; reprinted in The Modeling of Mind, Kenneth M. Sayre and Frederick J. Crosson, eds., Notre Dame Press, 1963, pp.269-270; and in Minds and Machines, ed. Alan Ross Anderson, Prentice-Hall, 1964, pp.43-59.
3. Douglas Hofstadter, Gödel, Escher, Bach, New York, 1979, pp.475-476.