Dear Lucas

I published a paper (about truth and deflationism) last year in Mind, which briefly mentioned the Lucas argument. It is:

Ketland, Jeffrey 1999: "Deflationism and Tarski's Paradise", Mind 108 (January): 69-94.

It turns out that if you add Tarski's theory of truth to Peano arithmetic (PA), then G is a theorem of the new system. Indeed, the statement "PA is true" is also a theorem of the new system. So is Con(PA). For this to happen, adding Tarski's theory of truth to a formal system F must generate a *non-conservative* extension.
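A sketch of that reasoning, in informal notation (T here is the Tarskian truth predicate, and the truth axioms are assumed to come with induction extended to formulas containing T):

```latex
\begin{align*}
&\text{(i)}\ \ \forall \varphi\,\bigl(\mathrm{Prov}_{PA}(\varphi) \to T(\varphi)\bigr)
  && \text{(global reflection: every PA theorem is true)}\\
&\text{(ii)}\ \ \neg T(\ulcorner 0 = 1 \urcorner)
  && \text{(by the truth axioms)}\\
&\text{(iii)}\ \ \neg \mathrm{Prov}_{PA}(\ulcorner 0 = 1 \urcorner)
  && \text{(from (i) and (ii)); i.e., } \mathrm{Con}(PA)\\
&\text{(iv)}\ \ G
  && \text{(from } \mathrm{Con}(PA)\text{, by the formalized first theorem)}
\end{align*}
```

Step (i) is where non-conservativeness enters: the disquotational T-sentences alone do not suffice to prove global reflection; the compositional Tarskian axioms (with extended induction) do.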

I also show that adding all the "disquotational" T-sentences, " 'A' is true iff A ", always yields a conservative extension. Such theories of truth are called "minimalist".
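In symbols, the minimalist theory consists of one schema-instance per sentence A of the language of arithmetic:

```latex
% The disquotational T-schema, one instance per sentence A:
T(\ulcorner A \urcorner) \leftrightarrow A
% Adding all such instances to PA is conservative over PA:
% every arithmetic sentence provable in the extended system
% was already provable in PA.
```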

I define a deflationary theory of truth to be one which generates conservative extensions. I define an adequate theory of truth to be one which allows you to prove "T is true" from T. It follows that only a non-conservative theory of truth could be adequate. Tarski's theory of truth is thus adequate but non-deflationary. Minimalist theories of truth are deflationary (i.e., conservative) but inadequate.

It follows that one can (deductively) reason to the truth of G by assuming PA, and also assuming a sufficiently rich truth theory (i.e., Tarski's).

Regards - Jeff Ketland

Dr Jeffrey Ketland

Department of Philosophy C15 Trent Building

University of Nottingham NG7 2RD

Tel: 0115 951 5843

E-mail: Jeffrey.Ketland@nottingham.ac.uk


Like you and Professor Penrose, I think that formal algorithmic systems will turn out *not* to represent the human mind.
But I do not think that the argument, as presented thus far in your 1961 paper (and by Penrose in 1989/1994), provides support for the crucial premise:

(1) For any consistent (true) formal system F, the mind can recognize the consistency (truth) of F

Of course, *if* (1) is true, then the mind can always "go one better" than any such F.
Having recognized the consistency of F, there is a simple deductive proof of G (and thus of G's truth).
But Gödel's 1st Incompleteness Theorem itself doesn't imply (1).
Neither does reflection on Gödel's theorem.
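The "going one better" step can be made explicit. For any suitably strong, formally specified F, the formalized first incompleteness theorem is provable in PA itself:

```latex
% The conditional the mind would exploit, provable in PA:
PA \vdash \mathrm{Con}(F) \to G_F
% So if the mind could recognize Con(F), a single modus ponens
% would yield G_F; and since G_F "says" that it is unprovable
% in F, recognizing G_F on this basis is recognizing its truth.
```

The entire weight of the argument therefore rests on the antecedent: the mind's alleged ability to recognize Con(F), which is exactly what (1) asserts and what Gödel's theorem does not supply.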

Indeed, (1) is a strong premise about the capacities of human minds. It implies that any mind can recognize the consistency of PA, ZFC, etc. (assuming they are consistent). But this is at least questionable, and certainly is not implied by Gödel's Incompleteness Theorem. E.g., if you said to a mathematical logician "I can see that G for ZFC is true", they would say "How do you *know* that ZFC is consistent?". If you could provide some special way of mentally seeing that ZFC is consistent which also generalized to any formally specified system, then you would be home and dry. But that's the real point. (If you could only show that some single system, like ZFC, is consistent, the malicious mechanist could conclude that you are really represented by a formal system F which happens to be stronger than ZFC, and so contains Con(ZFC) as a theorem.)

What Gödel's 1st Theorem does imply is:

(2) The set of true arithmetic sentences is not axiomatizable (cannot be generated by a sound system).

What I pointed out in my Mind article was that,

(3) Given a formal system F, there is a stronger system F+ (containing the axioms of Tarski's theory of truth), and F+ proves the Gödel sentence G for F. The natural reasoning to the truth of G is, indeed, via the truth of F. Oddly enough, a formal system cannot prove its own truth. (This is odd because " "snow is white" is true" seems to follow from "snow is white".)

So: I think the argument needs augmentation to provide support for the claim

(4) The set of arithmetic sentences that the mind can recognize as true (call this set TM) is not axiomatizable.

If we had empirical evidence that TM was not axiomatizable, then we would be home and dry: the mind is not represented by any formal system F. I.e., human mental cogitation is not fully exhausted by algorithmic processes.

(I happen to think that human mental cogitation is not fully exhausted by algorithms. I.e., I think that (4) is true. But what needs further argument is precisely (4), that TM is not axiomatizable. I do not believe that (4) follows from Gödel's theorems, or even from philosophical reflection on Gödel's theorems. Versions of (1) do follow from (4), however.)

Best wishes - Jeff Ketland