What is the Ockham Society?

The Ockham Society provides a forum in which graduate students in philosophy (particularly BPhil, MSt, and PRS students) may present their ideas to their peers at the University of Oxford. Our aim is to give every Oxford graduate student the opportunity to present their ideas in a friendly environment at least once during their time in Oxford. It is an ideal opportunity to get feedback on your essays and to gain your first experience of academic presenting. Small, experimental, and unfinished papers are just as welcome as more polished ones.

If you would like to present a paper to the society, please send a title and an abstract of at most 150 words to Sean Costello (firstname.lastname(at)philosophy.ox.ac.uk). Oxford DPhil students in Philosophy are instead encouraged to present at the DPhil seminar.

The Ockham Society will meet online via Zoom during Trinity Term, on Fridays from 13:00 to 14:00 UK time. A URL will be distributed via email; please write to Richard Roth if you have not received it.

Programme for Trinity 2020

Week 1
1 May
Chair: Pietro Cibinel
Paul de Font-Reaulx (Trinity)
The Cognitive Capacity Question for Robust Moral Realism

Robust moral realists hold that some of our moral beliefs are true in virtue of irreducible normative facts, and most of them believe that we have a capacity to somewhat reliably form true beliefs about such normative facts. Here is a question: if we have such a capacity, how have we, qua biological beings, come to possess it? By far the most plausible and popular response, recently defended by Derek Parfit amongst others, is the Universal Byproduct Response, which states that we have developed a reliable capacity for moral reasoning as an evolutionary byproduct of a general capacity for reliable a priori reasoning. In this talk I argue that, despite its attractiveness, the Universal Byproduct Response is indefensible. More specifically, I argue that the claim that moral reasoning is a byproduct of other cognitive capacities is committed to hidden empirical theses regarding the cognitive mechanisms we exercise in moral reasoning, and that these theses appear by all empirical lights to be false. I conclude that, in spite of these issues, the Universal Byproduct Response might still be the best response to the Cognitive Capacity Question, which raises questions about the prospects of a defensible epistemology for robust realists.

Week 2
8 May
Chair: Richard Roth
Pietro Cibinel (Jesus)
Information Value as Accuracy Gain

Suppose you aim to settle a certain question. How can we measure the usefulness of getting certain answers, or asking certain other questions, relative to that aim? Various models of information value have been proposed, building on insights from decision theory, Bayesian statistics, and information theory. In this paper, I develop and defend a novel model of information value in terms of accuracy gain. I show how the model fares better than its contenders on a number of fronts: it solves a problem concerning synchronic vs. sequential learning; it accounts for the value of surprisal, of the exclusion of possibilities, and of changes in belief; it must be assumed, at least at the objective level, to account for the costs of misleading evidence; and it promises to meet basic standards of empirical adequacy.
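
To fix ideas, here is one standard way of cashing out information value as expected accuracy gain; the notation, and the choice of the Brier score, are illustrative assumptions rather than the specific model defended in the talk. Let c be the prior credence function, let Q be a question (a partition of possible answers E), and measure accuracy at a world w by the negative Brier score

\[
\mathcal{A}(c, w) \;=\; -\sum_{i} \bigl(c(X_i) - v_w(X_i)\bigr)^2,
\]

where v_w(X_i) is 1 if X_i is true at w and 0 otherwise. The value of asking Q is then the expected accuracy of the conditionalised credences minus the expected accuracy of the prior:

\[
V(Q) \;=\; \sum_{E \in Q} c(E) \sum_{w \in E} c(w \mid E)\, \mathcal{A}\bigl(c(\cdot \mid E), w\bigr) \;-\; \sum_{w} c(w)\, \mathcal{A}(c, w).
\]

For strictly proper measures such as the Brier score, V(Q) ≥ 0: by the prior's own lights, asking a question never has negative expected value, even though a particular misleading answer can lower actual accuracy.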

Week 3
15 May
Chair: Lucas Didrik Haugeberg
Steven Diggin (Merton)
Inference to the Best Justification

This is a paper about the methodology of normative ethics. It is also about the metaphysical structure of normative reality. The paper begins by outlining several problems with the method of ‘reflective equilibrium’: it is unhelpful, insubstantial, and metaethically uninteresting. As the Cornell Realists made clear thirty years ago, any interesting and adequate methodology of normative ethics must involve the notion of explanation. The standard objection to this proposal is that normative facts do not feature indispensably in causal explanations of natural phenomena. No matter: these are not the only kinds of interesting explanations. In particular, facts about normative reasons are indispensable to the justification of actions, which is itself a kind of explanation. First, facts about normative reasons explain facts about what actions people ought to do, as John Broome and John Hyman have argued. This is enough to give a substantive methodology of normative ethics and a principled way to resolve normative disagreement. Second, facts about normative reasons explain why people actually act as they do, as Jennifer Hornsby, John McDowell, and Joseph Cunningham have argued. This forms the basis of a new argument for metanormative realism. Finally, the paper applies the method of Inference to the Best Justification to some longstanding problems in normative ethics involving action-guidingness and usability. This suggests that the well-worn distinction between a decision procedure and a criterion of rightness is worthless, and so yields a new version of an old problem for certain kinds of Consequentialism.

Week 4
22 May
Chair: Maurice Grüter
Livia von Samson-Himmelstjerna (St. Peter's)
Critical Genealogy and Emancipatory Change

Amia Srinivasan recently argued that critical genealogy is a guide to what she calls ‘worldmaking: the transformation of the world through a transformation of our representational practices’ (Srinivasan 2019, 145). This claim is significantly stronger than two predominant claims about genealogy: First, it has often been argued that inquiry into the origins and development of our representations dismantles their seeming inevitability and puts their mutability on display. Second, it has been argued that genealogical inquiry may render hidden functions of our representations visible, so that they may, in a second step, be normatively assessed. Stronger claims about the significance of backward-looking genealogy to forward-looking emancipatory change seem to confuse the explanatory and the normative: Why should the answer to the question of how our representations developed tell us anything about how to intervene in them? Genealogy, it is thought, may advance the understanding of our representations, but is only preparatory, not constitutive of normative assessment or practical change. Taking these objections as a starting point, the paper aims to clarify Srinivasan’s claim by comparing her account of critical genealogy to Rahel Jaeggi’s transformative notion of immanent critique.

Week 5
29 May
Chair: Steven Diggin
Denis Kazankov (St. Cross)
What is Do(d/g)gy about Thin Centralism?

In metaethics, it is common to distinguish between thick and thin concepts in the following manner: thick concepts have both descriptive and evaluative content, while thin concepts have only evaluative content. Many theorists then take it for granted that the evaluative content of thick concepts is exhausted by the complete content of thin concepts. This effectively allows them to endorse two claims which together give rise to the view called ‘thin centralism’.

(1) Thick concepts are discretely composed of a thin concept plus a descriptive component.

(2) Thin concepts are conceptually and explanatorily prior to thick concepts.

My talk will offer a counter-argument to this view. Firstly, I will posit conditions that a concept has to meet to qualify as conceptually and explanatorily prior to another concept:

(C) A concept X is conceptually prior to a concept Y only if it is impossible to understand Y without understanding X but not vice versa.

(E) A concept X is explanatorily prior to a concept Y only if (i) X applies to all objects that are believed to belong to the extension of Y, and helps to distinguish these objects from the extensions of all exclusive concepts; and (ii) there is a non-arbitrary relation between X and Y that helps to explain why the objects in the extension of Y have been grouped together.
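
For concreteness, (C) admits a semi-formal modal rendering; the box and the ‘Und’ (understands) predicate are an illustrative gloss, not the speaker's own notation:

\[
X \prec_{\mathrm{con}} Y \;\rightarrow\; \Box\bigl(\mathrm{Und}(Y) \rightarrow \mathrm{Und}(X)\bigr) \,\wedge\, \neg\Box\bigl(\mathrm{Und}(X) \rightarrow \mathrm{Und}(Y)\bigr).
\]

That is: necessarily, anyone who understands Y understands X, but not conversely. (E) resists a comparably compact rendering, since its clause (ii) appeals to a non-arbitrary explanatory relation; clause (i), though, amounts to requiring that X apply to everything in the believed extension of Y while still distinguishing that extension from those of all exclusive concepts.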

Building on these conditions, I will argue that thin concepts, on both cognitivist and non-cognitivist interpretations, struggle to satisfy them in relation to thick concepts. I will conclude by sketching an alternative account on which evaluations affect the semantics of thick concepts by restricting how those concepts can be defined, even though the evaluations do not themselves occur in the definitions.

Week 6
5 June
Chair: Yiyang Gao
Yuming Liu (St. Cross)
The Interpretation of Goodness in Eighteenth-Century Chinese Readings of the Xunzi

This presentation focuses on the interpretation of a single term: goodness (shan 善). Eighteenth-century scholars are famous for their rigorous discipline in reading philosophical texts, yet the interpretation of goodness in the Xunzi was an anomaly, escaping the acknowledged hermeneutic rules of the time. The presentation first argues that this anomaly should not be dismissed as an isolated case, by comparing the interpretation of goodness in other texts contemporary with the Xunzi. It then proposes a possible explanation: the eighteenth-century scholars were contributing to the lineage of Chinese philosophy in an unprecedented way. The fruits of their endeavour will be shown at the end of the presentation.

Week 7
12 June
Chair: TBA
Sasha Arridge (Mansfield)
The Terminus of Normative Explanation: A Radical Value-First Account

An explanation, in order to be complete, must come to a satisfactory end. As Nozick observed in Philosophical Explanations, this terminus of explanation must take the form of a fundamental fact, where this fact is either basic, in that it is simply true and admits of no further explanation, or self-explaining, in that it somehow explains itself. Normative explanations are no different: at the terminus of all normative explanations there must be some fundamental normative fact. The nature of fundamental normative facts is the subject of this paper.

Roughly, there are two kinds of normative fact: evaluative and deontic. Evaluative facts tell us how we should want the world to be; deontic facts tell us what actions we may perform. Paradigmatic evaluative facts are facts about goodness; paradigmatic deontic facts are facts about reasons for action, which determine the rightness or wrongness of actions. The question now arises: What kind of fact appears at the terminus of normative explanations? Broadly, three answers have been proposed: i) that fundamental normative facts are evaluative, ii) that they are deontic, or iii) that they can be either deontic or evaluative. Represented diagrammatically, we get the following options:

  1. N → G → R
  2. (N → G) = R
  3. N1 → G, N2 → R

where G is goodness, R is reasons for action, N is some natural property, and → is a grounding relation. This paper argues, however, that none of these options is satisfactory. After critiquing these views, it proposes a new account according to which facts about value jointly ground both evaluative and deontic facts; facts about value, properly understood, are neither deontic nor evaluative in themselves, but serve to ground both kinds of fact. On this view, facts about value are the fundamental normative facts:

Radical Value-First Account: N → V, V → R, V → G, where V is value.

The majority of this paper is devoted to sketching the contours of this view, and explaining how it avoids well-known problems facing the other three views.

Week 8
19 June
Chair: Sam Williams
Alexander Read (St. Hilda's)
Knowing on Occasion

Standard models of epistemic contextualism are indexical, comprising a metalinguistic claim that the content of ‘knows’ varies with the context of utterance, and a semantic explanation of how this is so (e.g. that the verb somehow behaves like a gradable adjective, modal, or generalised quantifier). In this paper, I develop a model of non-indexical epistemic contextualism based on occasion-sensitive semantics. The content of ‘knows’ is invariant, but its extension can vary with context depending on the goals of the ascribers.

I argue that there are several advantages to this model. First, treating ‘knows’ as non-indexical patterns with data concerning the interaction of degree modifiers with adjectival participles and proposition-embedding verbs. Second, cases typically used to motivate epistemic contextualism are subsumed under the general category of ‘Travis cases’, instances where sentences containing no indexical predicates vary in truth-value depending on the context of utterance. Finally, the framework is ‘neutral’; a variety of epistemic conditions can be implemented to explain how goals affect the extension of ‘knows’. Goals might introduce relevant counterpossibilities for a subject to rule out (Lewis, 1996) or they might determine how far throughout modal space a subject's belief must match the facts (DeRose, 1995).
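
To make the contrast vivid, here is a Kaplan-style sketch; the standards parameter s_c and the notation are an illustrative gloss on the indexical/non-indexical distinction, not the machinery of the paper itself. On indexical models the content of ‘knows’ varies with the context of utterance c:

\[
\llbracket \text{knows} \rrbracket^{c} \;=\; \mathrm{know}_{s_c}.
\]

On the non-indexical model the content is invariant, but the circumstance of evaluation carries a contextually set parameter, so the extension varies while the content does not:

\[
\llbracket \text{knows} \rrbracket^{c,\, \langle w,\, s_c \rangle} \;=\; \{\langle x, p \rangle : x \text{ knows } p \text{ relative to } \langle w, s_c \rangle\}.
\]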
