Mathematical Methods in Philosophy
19th to 21st September 2008
This is the fourth in a series of meetings exploring mathematical methods in epistemology, semantics, theories of truth, and philosophy of mathematics. It is jointly supported by the British Academy, the London Mathematical Society, and the British Logic Colloquium (with further financial support from the University of Bristol).
All talks will be in the Frank Lecture Theatre, Physics Department, Tyndall Avenue.
The Physics Department is numbered 33 on this map.
There is a registration fee of £20, with a reduced fee of £10 for students and postgraduates. The London Mathematical Society provides grants for postgraduates to cover registration fees and travel and accommodation costs.
The Department of Mathematics has another webpage for this meeting.
Please contact Helen Craven, +44 117 928 7978 (firstname.lastname@example.org), for technical/administrative questions or for help concerning the conference.
Contact: Philip Welch
Quasi-inductive definitions are a pattern for transfinitely defining sets of natural numbers, extracted by J. Burgess from the Herzberger-Belnap-Gupta revision theory of truth. The most interesting aspect of constructions of this sort is that they can be regarded as the next natural step beyond the usual inductive type. We present an axiomatic framework comprising the basic properties of quasi-inductive definitions, much as has been done for first-order monotone and nonmonotone inductive ones. Then the problem of proof-theoretic strength will be addressed, and some initial results on bounds for it (the upper one given in terms of a theory of sets extending KP) will be discussed. In our final remarks we point out issues to be dealt with in the near future, including some which go back to the original motivations of the revision-theoretic proposal.
Deflationists hold that truth is insubstantial and minimal. It has been suggested that this insubstantiality of truth commits the deflationist to a conservative theory of truth. There are very different axiomatic theories of truth that are conservative extensions of PA. We will show how the notion of interpretability can be used to characterize different conservative theories and their properties. Here the property of reflexivity will play an important role: we will show that, for conservative theories of truth, interpretability in PA and reflexivity coincide.
Tarski rejected an axiomatisation of truth based on the T-sentences only because, according to Tarski, such a theory is deductively too weak. Many logicians and philosophers have followed Tarski's verdict and concentrated on compositional axiomatisations, which obviously overcome the problem of deductive weakness. Tarski's assessment, however, is based on his method of solving the liar paradox, that is, on the typing of the truth predicate and the strict distinction between object- and metalanguage.
If the type restriction is relaxed and the truth predicate is allowed to apply to some sentences containing the truth predicate, the situation changes completely. McGee has proved that, over a weak base theory, any theory can be re-axiomatised as a set of T-sentences. In particular, compositional axiomatisations can be axiomatised by T-sentences only. The T-sentences employed for McGee's trick are not natural, however, so the re-axiomatisation is hardly well motivated.
I will look at the T-sentences T[A] <-> A where the truth predicate is allowed to occur in A only within the scope of an even number of negation symbols and where A may contain free variables. This theory can define strong compositional truth predicates. Moreover, in contrast to some popular truth theories, it can be consistently closed under necessitation and conecessitation.
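The restricted schema can be rendered as follows (my notation, not necessarily the speaker's):

```latex
% A sketch of the restricted schema: the biconditional
%   T[A] <-> A
% is assumed only for formulas A in which the truth predicate T occurs
% solely within the scope of an even number of negation symbols
% (i.e. only positively); A may contain free variables.
T[\ulcorner A \urcorner] \leftrightarrow A
\qquad \text{($T$ occurs only positively in $A$)}
```

For instance, T[T[A]] <-> T[A] falls under the schema (zero negations over the inner T), while the liar-style instance with T under a single negation is excluded.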
In this presentation the prospects of deflationism about the concept of truth are investigated. A new version of deflationism, called inferential deflationism, is articulated and defended. This version of deflationism holds that truth is essentially an inferential notion and that there are no unrestricted laws of truth. It is argued that inferential deflationism avoids the pitfalls of earlier deflationist views such as Horwich's minimalist theory of truth and Field's version of deflationism.
Zermelo, in his paper “Über Grenzzahlen und Mengenbereiche” (1930), established the structure of the cumulative hierarchy of sets, as characterized by the Zermelo-Fraenkel axioms. My talk will focus not on what Zermelo showed about the categoricity of the hierarchy up to a given level but on what he said about the open-endedness of the levels, and what this tells us about the concept of set.
In his classic work The Logical Syntax of Language (1934) Carnap defends three distinctive claims: (1) The thesis that logic and pure mathematics are analytic and hence without content and purely formal. (2) A radical pluralist conception of pure mathematics (embodied in his Principle of Tolerance) according to which we have great freedom in choosing the fundamental postulates. (3) A minimalist conception of philosophy on which most traditional questions are rejected as pseudo-questions and philosophy is identified with the study of the logical syntax of the language of science. Carnap's discussion is quite sophisticated metamathematically and his position is quite subtle. Indeed I think it is the most sophisticated defense of pluralism in mathematics that has appeared to date and that is my reason for concentrating on it. I will begin by criticizing both Carnap's radical form of pluralism and his minimalist conception of philosophy. I will then turn to the question of what it would take to establish pluralism in mathematics. This will involve describing various "bifurcation scenarios" in set theory that are made available by some recent results (joint with Hugh Woodin).
We will give an overview of how probability relates to truth, from a logical point of view. On the probabilistic side, this leads us to an investigation of (unary) absolute probability measures and (binary) conditional probability measures on propositional or first-order languages. Amongst other topics, we will show (i) what a truth-conditional AND probabilistic semantics for conditional logic looks like, (ii) in what sense logical truths with nested conditionals correspond to probabilistic reflection principles, (iii) how a probabilistic theory of truth for semantically closed languages can be developed, and finally (iv) how such a theory can be extended to a probabilistic theory for truth AND probability.
For a large part of the twentieth century, the following view was widely accepted in philosophy: sentences such as 'The number two is green' (i.e. those sentences standardly labelled as 'category mistakes') are meaningless. Numbers, the thought went, are the wrong kind of thing to predicate colours of, and consequently such "type confusions" result in meaningless sentences. While these days the view that (standard) category mistakes are meaningless is still held by several philosophers, most seem to think that the view is wrong: category mistakes are perfectly meaningful and the view that they are meaningless is no more than an old-fashioned dogma.
However, there remains another kind of type confusion - one that results when we replace an expression of one grammatical category (e.g. a name) in a sentence with an expression belonging to another (e.g. a predicate). For example by replacing the name 'John' in the sentence 'John runs' with the predicate 'eats', we get the ungrammatical string 'eats runs' (I label such strings 'grammatical type confusions'). Interestingly, the claim that grammatical type confusions are meaningless seems to be generally taken for granted and is rarely questioned.
Are there good reasons for taking grammatical type confusions to be meaningless, or is this yet another philosophical dogma - the last remaining dogma of type confusions? The aim of my paper is to discuss this question. I explore a variety of potential reasons for taking grammatical type confusions to be meaningless, and argue that they are not conclusive. I conclude that we should at least be willing to question this last dogma of type confusions.
Many philosophers will claim that Goodman's GRUE Paradox effectively sounded the death knell for Carnap and Johnson's Inductive Logic programme of showing that degrees of belief (i.e. subjective probabilities) are determined by purely logical considerations on the available evidence, and in that sense are essentially 'objective'. As a consequence of this disenchantment the approach to the problem of assigning beliefs via considerations of logical or rational principles of uncertain reasoning largely stagnated in the latter half of the twentieth century.
I would claim however that not only is this capitulation unjustified but that subsequent advances in the formalism, understanding and methods of predicate probability logic provide the opportunity for a revitalization of the original programme as a topic within mathematical logic but openly welcoming and encouraging philosophical input.
Hilbert’s Programme sees the more theoretical (‘ideal’) parts of mathematics as an instrument for proving truths in the more elementary (‘real’) parts. Gödel’s Second Incompleteness Theorem shows that one cannot prove the consistency of ideal mathematics from within real mathematics and hence is generally thought to destroy the programme. My talk takes issue with the latter verdict. By examining non-deductive reasoning in mathematics, I explain why a rational reconstruction of Hilbert’s Programme survives Gödel’s second theorem. The moral applies to other forms of mathematical instrumentalism.
The fundamental problem of epistemology is to say when the evidence in an agent's possession justifies the beliefs she holds. In this talk, I present work carried out jointly with Hannes Leitgeb, in which we defend the Bayesian epistemologist's solution to this problem by appealing to the following fundamental norm:
ACCURACY: Try to minimize inaccuracy in your beliefs.
To make this norm mathematically precise, we describe three epistemic dilemmas that an agent might face if she attempts to follow ACCURACY, and we show that the only inaccuracy measures that do not give rise to such dilemmas are the quadratic inaccuracy measures. We then derive the main tenets of Bayesian epistemology from the mathematical version of ACCURACY to which this characterization of the legitimate inaccuracy measures gives rise. We also consider Richard Jeffrey's attempt at generalizing Conditionalization. We show not only that his rule cannot be derived from the norm, but that the norm reveals it to be illegitimate. We end by deriving an alternative updating rule for those cases in which Jeffrey's rule is usually supposed to apply.
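As a minimal illustration (not from the talk itself), a quadratic inaccuracy measure of the Brier-score kind can be sketched in a few lines of Python; the function name and the example credences are my own:

```python
# Illustrative sketch of a quadratic (Brier-style) inaccuracy measure:
# the inaccuracy of a credence function at a world is the sum of squared
# distances between the credences and the truth values (1 or 0) there.

def quadratic_inaccuracy(credences, truth_values):
    """Sum of squared distances between credences and 0/1 truth values."""
    return sum((b - v) ** 2 for b, v in zip(credences, truth_values))

# Two propositions; a world where the first is true and the second false:
world = [1, 0]

# A moderate credence function ...
print(quadratic_inaccuracy([0.7, 0.3], world))  # 0.18

# ... is less inaccurate at this world than an opinionated but wrong one:
print(quadratic_inaccuracy([0.1, 0.9], world))  # 1.62

# Perfect accuracy: credences matching the truth values exactly.
print(quadratic_inaccuracy([1, 0], world))      # 0
```

The point of the quadratic family in this setting is that minimizing expected inaccuracy so measured never penalizes an agent for having credences that match her own expectations, which is what blocks the dilemmas the abstract mentions.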
Call mereological harmony the thought that certain mereological relations on material objects mirror and are mirrored by mereological relations on regions of space. While many agree that classical extensional mereology gives us an adequate description of the mereological structure of space, there is much disagreement as to whether material objects are governed by its principles. However, even non-extensional mereologists have been attracted to mereological harmony in one form or another. The paper will attempt to characterize the core of mereological harmony that is acceptable to non-extensional mereologists and ask whether it places any substantive constraints on the structure of space. It will argue, for example, that even very weak versions of mereological harmony will make Euclidean space an inhospitable environment for atomless gunk. Finally, it will look at different motivations for mereological harmony and ask whether -- and to what extent -- this thought may be negotiable.
In the talk we will discuss phase transition phenomena for first order Peano Arithmetic PA. To this end we consider several natural parameterised assertions which are PA-provable for small parameter values and which together with their negation are PA-unprovable for large parameter values. We explain the mathematical principles which are used to classify resulting thresholds. Our examples stem mainly from Ramsey theory, WQO-theory and the theory of well-orders.
We give a mostly expository talk on perfect information games, with some recent examples applied to theories of truth, generalised inductive definitions, identity, etc. These occur for the most part at the simplest end of the arithmetical hierarchy. It is often thought that the existence of a strategy for such games implies some kind of new or concrete epistemic access to the sets being characterised - in particular, perhaps, for open games, where one player may win after finitely many moves. It is possible to give characterisations of, say, a complete quasi-inductively definable set in an open game-theoretic manner, using however generalised quantifiers. Whether this produces any greater insight into the sets characterised is perhaps debatable.
last change: 13 May, 2006
e-mail address (please replace "at" by the usual "@" symbol): volker.halbach at philosophy.oxford.ac.uk