Like you and Professor Penrose, I think that formal algorithmic systems will turn out not to represent the human mind. But I do not think that the argument, as presented in 1961 (and by Penrose in 1989/1994), provides support for the crucial premise:
(1) For any consistent (true) formal system F, the mind can recognize the consistency (truth) of F
Of course, if (1) is true, then the mind can always "go one better" than any such F. Having recognized the consistency of F, there is a simple deductive proof of G (and thus of G's truth). But Gödel's 1st Incompleteness Theorem itself doesn't imply (1). Neither does reflection on Gödel's theorem.
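The deductive step here can be made explicit. A sketch, assuming F is a consistent, recursively axiomatized extension of PA, with Prov_F its standard provability predicate and G_F its Gödel sentence:

```latex
% Diagonal lemma: G_F "says" that it is not provable in F
F \vdash G_F \leftrightarrow \neg\mathrm{Prov}_F(\ulcorner G_F \urcorner)

% Formalized first incompleteness theorem (provable already in PA):
\mathrm{PA} \vdash \mathrm{Con}(F) \rightarrow G_F
```

So a mind that has recognized Con(F) obtains G_F by a single modus ponens; the entire burden of premise (1) lies in the recognition of Con(F) itself.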
Indeed, (1) is a strong premise about the capacities of human minds. It implies that any mind can recognize the consistency of PA, ZFC, etc. (assuming they are consistent). But this is at least questionable, and certainly is not implied by Gödel's Incompleteness Theorem. E.g., if you said to a mathematical logician "I can see that G for ZFC is true", they would say "How do you *know* that ZFC is consistent?". If you could provide some special way of mentally seeing that ZFC is consistent, one which also generalized to any formally specified system, then you would be home and dry. But that is the real point at issue. (If you could only show that some single system, like ZFC, is consistent, the malicious mechanist could conclude that you are really represented by a formal system F which happens to be stronger than ZFC, and so contains CON(ZFC) as a theorem.)
What Gödel's 1st Theorem does imply is:
(2) The set of true arithmetic sentences is not axiomatizable (cannot be generated by a sound system).
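A sketch of why (2) follows, writing TA for the set of true arithmetic sentences:

```latex
% Suppose, for reductio, that TA were axiomatized by a sound,
% recursively axiomatized system F. Soundness gives consistency,
% so Gödel's first theorem applies to F:
F \nvdash G_F \qquad\text{and}\qquad \mathbb{N} \models G_F

% Thus G_F \in \mathrm{TA} but G_F is not a theorem of F,
% so F does not axiomatize TA after all.
```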
What I pointed out in my Mind article was that,
(3) Given a formal system F, there is a stronger system F+ (which contains the axioms for Tarski's theory of truth), and F+ proves the Gödel sentence G for F. The natural reasoning to the truth of G is indeed via the truth of F. Oddly enough, a formal system cannot prove its own truth. (This is odd because " "snow is white" is true" seems to follow from "snow is white".)
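The route from F to F+ can be sketched as follows, assuming the truth predicate T is compositional and is allowed into F's induction scheme (without that extension of the induction scheme, the truth axioms are conservative over F and the argument fails):

```latex
% Induction on the length of proofs, using the compositional
% truth axioms, yields global reflection for F:
F^{+} \vdash \forall\varphi\,\bigl(\mathrm{Prov}_F(\ulcorner\varphi\urcorner)
  \rightarrow T(\ulcorner\varphi\urcorner)\bigr)

% Since F^{+} proves \neg T(\ulcorner 0 = 1 \urcorner), it proves
% \mathrm{Con}(F), and hence the Gödel sentence for F:
F^{+} \vdash \mathrm{Con}(F) \qquad\Rightarrow\qquad F^{+} \vdash G_F
```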
So: I think the argument needs augmentation to provide support for the claim
(4) The set of arithmetic sentences that the mind can recognize as true (call this set TM) is not axiomatizable.
If we had empirical evidence that TM was not axiomatizable, then we would be home and dry: the mind is not represented by any formal system F. I.e., human mental cogitation is not fully exhausted by algorithmic processes.
(I happen to think that human mental cogitation is not fully exhausted by algorithms. I.e., I think that (4) is true. But what we need is a further argument for (4), that TM is not axiomatizable. I do not believe that (4) follows from Gödel's theorems, or even from philosophical reflection on Gödel's theorems. Versions of (1) do follow from (4), however.)
Best wishes - Jeff Ketland