What is the Ockham Society?
The Ockham Society provides a forum in which graduate students in philosophy (particularly BPhil, MSt, and PRS students) may present their ideas to their peers at the University of Oxford. Our aim is to provide every Oxford graduate student with the opportunity to present their ideas in a friendly environment at least once during their time in Oxford. It is an ideal opportunity to gain feedback on your essays and to gain your first experience of academic presenting. Small, experimental and unfinished papers are just as welcome as more advanced ones.
If you would like to present a paper to the society please send a title and abstract of 150 words maximum to Sean Costello (firstname.lastname(at)philosophy.ox.ac.uk). Oxford DPhil Philosophy students are highly encouraged to present at the DPhil seminar.
Programme for Michaelmas 2018
We meet Fridays 1 - 3 pm in the Radcliffe Humanities Building, Ryle Room.
According to Kant’s transcendental idealism, two sources are responsible for experience: first, the intuitive and conceptual form-imposing structures that belong to the agent’s mind; and, second, the ‘things in themselves’ that give to the mind the ‘matter’ which it synthesizes through its form-imposing structures.
Many commentators have objected to this transcendental idealist picture of experience on the basis of its reliance on the idea of ‘noumenal affection’ (causal affection by things in themselves). The worry is that noumenal affection seems blatantly incompatible with noumenal ignorance (Kant’s dictum that we cannot know things in themselves). This seeming incompatibility between two fundamental claims of Kant’s doctrine is known as the “problem of noumenal affection”, and it threatens the very coherence of transcendental idealism.
In this presentation, I’ll show that under the standard interpretation of noumenal affection as causal affection, the prospects of an adequate solution are bleak. This is because the concept of causality not only involves something which Kant claims we know (in the empirical case) but, more importantly, inherently suggests a specific kind of metaphysical relation. The problem becomes a threat precisely because it is incoherent to categorically deny knowledge of a relatum (things in themselves) if we know what relation it stands in (causality).
Instead, I’ll argue that we have good textual and philosophical motivations for thinking that the standard interpretation of noumenal affection is incorrect. In its place, I’ll propose my own account, which I call the “grounding interpretation”. On this account, we can successfully read Kant as denying knowledge of the very relation involved in noumenal affection, and I show that the account helps us resolve many of the obstacles that faced the standard interpretation.
Émilie du Châtelet was a French natural philosopher of the Enlightenment. Her magnum opus, the Foundations of Physics (1740), has only been fully translated into English in the last few years. Its chapters have therefore received very little attention in the English-language philosophical literature. In this paper, I analyse Du Châtelet's argument in Chapter 5 of the Foundations, 'Of Space', whose importance Brading (2018) has already recognised.
I formalise the argument Du Châtelet offers in this chapter for the "interesting and highly unusual" claim that we necessarily perceive numerically distinct simples as spatially extended, which accounts for the spatial extension of bodies. I then contend, contra Stan (2018), that Du Châtelet's account leads to a pervasive idealism about both bodies and space. I illustrate how Du Châtelet's idealism improves on Leibniz and Wolff in accounting for the emergence of extension from extensionless simples.
In this talk I shall discuss a few ideas concerning determinism in quantum mechanics and its ontological and metaphysical status. Given the assumption that quantum mechanics is true, i.e. that the theory predicts the correct statistics for the outcomes of experiments, it turns out that any “hidden variable” extension that determines outcomes exactly, and so removes any indeterminism, must be inaccessible. That is, no realistic theory provides a verifiable ontology. As a result, the “hidden variables” could be chosen arbitrarily, and their dynamics are not uniquely defined. Further, since realist variables are inaccessible, stochastic jumps of the variables, rather than a deterministic evolution (as in, e.g., Bohmian mechanics), would serve the ontological commitment equally well.
Authorial connectedness, as John Holliday argues, is the reader’s experience or feeling of emotional intimacy with the author when reading a work of fiction. In this essay, I argue that this experience, as characterized by Holliday, lacks the kind of value or importance that would make it indispensable to the proper appreciation or understanding of a fictional piece. The paper is divided into two sections. In section one I consider a range of possible values this experience could have. It may be that the experience signifies some aesthetically (or artistically) valuable quality; or the fact that a piece is more likely to give rise to this experience in its readers may itself be something aesthetically valuable. The experience may also bear some moral value. Or it may be that having this experience is constitutive of a proper understanding of a piece. I argue, however, that all of these attempts fail, and that the experience has no special value that makes it something we should pursue. In the second part I argue that the intentional pursuit of this experience may in fact be aesthetically harmful, since having it requires us to treat the author as, metaphorically put, the biological parent of a piece. This requirement blocks another mode of interpretation, namely the mode described in Barthes’ famous ‘The Death of the Author’, which I argue is indispensable to a full and proper understanding of a work of fiction.
The paper is (very much) a work in progress, intended as part of a larger aesthetic and ethical investigation into the uncanny and the dreadful.
I propose that certain uncanny objects are those whose expression is that of what was once a human thing that has since bent or grown itself out of, and against, its human form. Imagining the coming to life of such objects is what lends a certain fear to the experience of the uncanny: but to experience the uncanny is primarily to undergo a distinct sort of perception (or perceptual experience) of life-form violation. The violation is, however, understood as an integral part of what it is to be Human in another sense.
That is, there are two notions of the human to be considered: (i) the human as the humane and mundane, against which uncanny being is contrasted; and (ii) the Human as that life-form which contains within itself both humanity (with a small ‘h’) and, naturally, the principle by which humanity is thrown off. This principle is not one of haphazard mutation or error but has a strict logic and reliance upon rational and sentimental understanding: ‘to violate that which is Law, merely because we understand it to be such’ (Edgar Allan Poe, ‘The Black Cat’).
I draw on some remarks and ideas in Kant’s Critique of Teleological Judgment, Philippa Foot and Michael Thompson on natural-historical judgments, and principally the language and imagery of passages from Edgar Allan Poe’s ‘The Black Cat’ to articulate this vision of what the uncanny can be. I will end with some remarks about the importance of stylistic and literary form to such articulation (this has a broader bearing on issues about philosophical method).
According to internalism about practical reasons, our normative reasons for action fundamentally depend on what we desire, care about, or value. According to what I call internalism about reasons as such, this is true of all our normative reasons (e.g. epistemic reasons). In this essay I show that some, if not all, internalists about practical reasons should either become internalists about reasons as such or abandon internalism altogether.
I start (§1) by locating my target audience: those who accept internalism as an account of what I call (for lack of a better term) the normativity of practical reasons. According to them, internalism provides the best explanation for why and how some considerations make it true that we should or ought to act in certain ways.
I then (§2) make a presumptive case for my view: the same questions these internalists raise about the normativity of practical reasons can be raised about the normativity of all normative reasons; the same answers they give in favour of internalist responses in the practical case can also be given in favour of internalist responses in the non-practical cases; moreover, linguistic, practical, and phenomenological considerations give us (internalist or otherwise) reason to treat all normative reasons as a single kind for the purposes of explaining their normativity; so these internalists should presume that either some form of internalism about reasons as such is true or all forms of internalism are false.
In §3 I discuss and refute a series of immediate objections to the presumptive case. I then (§4) turn to a series of stronger objections to my view, according to which any presumption there may be in favour of my view is defeated by consideration of the metanormative implications of some distinctive feature of (voluntary or intentional) action, practical reasoning, or practical reasons. Focusing on arguments by Cowie, Markovits, and Parfit, I argue that the best of these objections to date fail. So my internalists about practical reasons should presume that either internalism about reasons as such is true, or internalism about practical reasons is false.
I end (§5) by suggesting this is good, if bitter, news for internalism: unpopular as this view may be in this age of realisms, we’re better off embracing the challenge of moving towards internalism about reasons as such.
Actions can be morally right while lacking moral worth. Kant’s prudent shopkeeper, for example, treats his customers fairly in order to maximise his long-term profit. Surely the fair treatment of his customers is the right thing to do. However, we certainly do not think that the shopkeeper’s actions have moral worth. But what makes it the case that right actions have moral worth? The aim of this paper is to answer this question. The structure: after rejecting the right-reasons account of moral worth (§ 1), a new account of moral worth is stated and defended (§§ 2-3).
(§ 1) It is often argued that a right action has moral worth iff it was done for the right reasons. And, in fact, Kant’s shopkeeper obviously didn’t act for the right reasons. However, despite this initial appeal, the right-reasons account faces two problems: First, depending on one’s exact understanding of right reasons, the account is either too restrictive or too liberal. Second, the right-reasons account fails to deliver the right verdicts on degrees of moral worth. In particular, it cannot explain the influence of differences in effort on an action’s degree of moral worth.
(§ 2) According to the new account, a right action has moral worth iff it is non-accidentally right. Non-accidental rightness is analysed in terms of counterfactual robustness. The larger the proportion of worlds in which the agent performs the right action, the less accidental is its rightness. In this sense, Kant’s shopkeeper only accidentally did the right thing. But the new account is not only intuitive, it also avoids both problems of the right-reasons account. It’s neither too restrictive nor too liberal, and it provides the right verdicts on degrees of moral worth.
(§ 3) Against the new account one might raise three objections. The first two reject its sufficiency and its necessity, respectively. Both can be deflected by referring to the account’s technical details. The third objection voices the epistemological worry that we could never know whether a right action has moral worth. Yet the features by which we come to know that an action has moral worth need not be the same features which make it morally worthy. And an agent’s motivation is a reliable indicator of moral worth.
The aim of this paper is to pose a challenge to the linguistic distinction between ‘predicative’ and ‘attributive’ adjectives which Geach uses in his influential 1956 paper ‘Good and Evil’. I argue that Geach’s distinction does not point towards any substantial semantic difference between the two kinds of adjective, and that as a consequence Geach’s moral naturalism is left in want of a justification.
Geach draws a logical distinction between adjectives which are ‘predicative’ and those which are ‘attributive’. Predicative adjectives identify a feature in the world and ascribe it to certain nouns (e.g. if I describe ‘a red x’ I indicate that x is a red thing). Attributive adjectives, on the other hand, only identify features relative to the sort of thing that the noun picks out (e.g. ‘a small x’ doesn’t tell us that x is a small thing, since x might only be small for the sort of thing x is). For Geach, this distinction has important upshots for our moral epistemology. He argues that the adjectives ‘good’ and ‘evil’ are always and only attributive. Hence, if I call someone ‘a good person’, I cannot assess the truth of this claim simply by looking for some trait or quality of goodness which they may have. Rather, I have to look to what sort of thing it is that I am describing; in this case, I need to ask what it is to be a person.
In this essay I want to pose a challenge to the predicative/attributive distinction that Geach uses. In particular, I suggest that the distinction should be seen as a difference in degree rather than a difference in kind (as Geach argues). I do so by introducing the notion of a semantic reference class (henceforth SRC), which determines the meaning and valence of descriptions. For Geach’s distinction to mark a real logical difference between two types of adjective, I argue, it has to be the case that all and only attributive adjectives depend on SRCs. However, this does not seem to be the case: a number of paradigmatically predicative adjectives also depend on SRCs. (Indeed, it is unclear whether there are any adjectives that do not depend on SRCs.)
This has interesting implications for Geach’s moral epistemology.
Recent empirical studies show that young children produce and comprehend generic sentences far more quickly and readily than explicitly quantified ones. These facts pose a prima facie problem for an orthodox treatment of generics according to which they involve a generic operator called 'Gen', which is analysed in terms of quantification over a restricted domain of individuals, (parts of) worlds, or histories. Leslie (2007, 2008) and Gelman (2010) resolve this problem by arguing that generics give voice to an innate, default mode of generalising that is postulated to exist in the cognitive system. This paper develops an explanation of the acquisition data that is compatible with the orthodox approach.
The most important difference between what is sometimes called the “traditionalist” just war view and that of “revisionist” just war theory is the moral equality or inequality of combatants. On Walzer’s traditionalist view, combatants are liable to defensive harm because they cause harm—whether or not the harm they cause is justified. According to revisionist just war theory, justified harms caused in war are a species of defensive harms more generally. On this view, combatants on the unjust side (or on an unjust side) violate the rights of those they harm and are therefore liable to defensive harms. Combatants on the just side, however, are not liable to defensive harms.
There are two elements that are relevant to this discussion that have yet to receive the attention they deserve. The first is a distinction between killing in war as self-defensive and killing in war as other-defensive. I rely upon the claim, though I do not argue for it here, that killing in war is nearly always justified on other-defensive grounds, even when it appears to be self-defensive. I also admit the widely accepted asymmetry that, all else equal, self-defensive harms are permissible but not obligatory while other-defensive harms can be obligatory. Second, the responsibility view of liability relies upon the claim that only voluntary actions render an agent liable to defensive harms. But there is an important question about the sense in which the action must be voluntary. I argue that an agent who justifiably believes that some action φ is morally obligatory does not act voluntarily in the relevant sense when she does φ. She therefore remains immune from defensive harming. This argument, together with the claim that killing in war is justified on other-defensive grounds, yields the following conclusion: A combatant who reasonably believes his war to be just, and therefore reasonably believes that he acts in justifiable other-defense, also reasonably believes that he has a moral obligation to cause harm. Failing to meet the voluntariness criterion of the responsibility view, such a combatant does not make himself liable to defensive harms. In some sets of epistemic circumstances, the result is a new moral equality of combatants according to which combatants on both sides are immune from attack, and yet, combatants on both sides are all things considered permitted to harm one another.
It is tempting to analyse harm as follows: I harm you if and only if, had I not acted, you would have been better off. This analysis, however, runs into several well-known difficulties. I will focus on two: the preemption problem and the non-identity problem. Both problems arise when intuitively I have harmed you, but you would not have been better off otherwise: in the first kind of case, because I have preempted some equal or worse harm; in the second kind, because you would not have existed had I not acted.
These problems are sufficiently dire that a non-comparative conception of harm has been proposed: roughly, that I harm you if and only if I cause you to occupy some non-comparatively bad state. But it is unclear whether the non-comparative view of harm successfully avoids the non-identity problem, and in any case it goes badly wrong in a wide range of ordinary cases (for instance, I do not harm you by causing you to occupy the state 'has bad eyesight' if you were previously blind).
In this talk, I explain the error in the counterfactual view of harm, and show how it can be repaired. The resultant analysis clearly avoids, I will suggest, the preemption problem (without misclassifying any other ordinary cases). I will also argue that the proposed analysis, more importantly, leaves us in a considerably better position with respect to the non-identity problem, and sketch a solution to the residual complications.
Moral ignorance refers to cases in which one does not know whether an act is permissible. Some philosophers have recently posited moral vagueness, which refers to cases in which one does not know whether an act is permissible, because one cannot, in principle, know whether that act is permissible. The literature on moral vagueness splits over whether it is semantic, epistemic, or ontic. However, nobody has explored the possibility that moral vagueness might not exist.
To fill the gap, I will supply an argument for denying moral vagueness: Mere Ignorance: If moral realism is true, then putative moral vagueness is merely moral ignorance. Putative moral vagueness may characterize an act if the following two conditions are satisfied: (1) License: The act’s being permissible would serve our self-interest; that is, we have a stake in the moral claim’s licensing us. (2) Arationality: The best arguments for or against the permissibility of the act depend on arational assumptions.
While not necessarily endorsing Mere Ignorance, I will consider how it might avoid some pitfalls of positing moral vagueness, and so warrant further examination.
The current debate on reasoning centres on two major questions: I) What is reasoning (or inference)? II) What is correct reasoning? Reasoning is here understood as the mental activity through which we derivatively form beliefs and intentions on the basis of some premises. A dominant view in the debate is the so-called Reasons View, according to which (I) the nature and (II) the correctness of reasoning can be explained in terms of normative reasons. Put roughly, the view is that reasoning is a way of responding to reasons and you reason correctly only if the premises are (good) reasons for the conclusion.
In this paper, I first argue that the Reasons View is underdeveloped. Though many authors assume a close relation between reasoning and normative reasons, few defend it in detail. I then argue that recently proposed accounts are untenable. The prospects of the Reasons View look dim: reasoning is not essentially a matter of responding to reasons, and the standard of correctness of reasoning does not derive from normative reasons for belief and action. These arguments suggest that we can explain the nature and correctness of reasoning in more fundamental normative terms. I end by exploring this path towards a novel theory of reasoning.
It is widely accepted that the more you desire something, the more it would be rational for you to pursue it. This idea is also often formulated in terms of the following principle: The Strength Principle: Rationality requires of you that you take the means to satisfy your strongest desire.
In this paper, however, I am going to cast some doubt on the prescriptive force of the Strength Principle. I first distinguish between three alternative conceptions of desire strength: desire strength as the decision-theoretical notion of utility, as phenomenological intensity, and as motivating power. As I argue, desire strength as utility lacks prescriptive force because it tracks what we ended up choosing and pursuing, while desire strength as phenomenological intensity has derivative rational relevance only because it often indicates which desire can bring about greater pleasure within us. I then challenge the prescriptive relevance of desire strength as motivating power by considering the following example: As you are removing the damaged unit from the external solar panels on your spaceship, you exert too much force and accidentally detach yourself from the cord that connects you to the spaceship. You cannot help but find yourself gradually pulled away from your spaceship and toward the green planet far on the horizon. You are desperate at first, but you then think that you would rather drift toward somewhere than float aimlessly in space forever anyway. You then hear a strange voice from the audio system in your spacesuit: “You’re entering our realm. We require you to move toward our green planet”.
There is something peculiar about the requirement issued to you. You are already being pulled toward the green planet and cannot do otherwise, so the requirement seems futile: your following it cannot be what brings about the required act in the first place. Furthermore, you would rather go to the green planet than anywhere else, so the disappearance of this requirement from your thought would not make any difference. Indeed, it can be argued that the best way to bring about the required act is just for the requirement to disappear from your thought. Can this requirement be a valid prescription for you? No. And I contend the same holds for the Strength Principle.
Epistemically akratic subjects believe both ‘p’ and ‘my evidence does not support that p’. Such a combination of beliefs seems to exhibit a patent form of irrationality; indeed, many philosophers think that epistemic akrasia can never be rational. Against this, I argue for the possibility of rational epistemic akrasia by considering cases in which subjects are unsure, and know that they are unsure, what their evidence is. Knowledge iteration failure provides a natural route to such examples. This class of examples has the peculiar property that one cannot know that one is in such a case, which grants immunity from the standard objections against rational epistemic akrasia.
Don Fallis has argued against the orthodox methodology of mathematicians, whereby no mathematical proposition is considered officially established until there is a deductive proof of it. Fallis challenges this orthodoxy by arguing that deductive proofs have no unique epistemic virtue that non-deductive proofs always lack. Specifically, Fallis thinks probabilistic proofs can provide evidence for mathematical propositions that is just as good as, and occasionally better than, that provided by deductive proofs.
Epistemic Contextualism looks as if it may afford a defence of the orthodox view: perhaps it is due to the epistemic standards of mathematicians that non-deductive methods like probabilistic proofs do not provide sufficient evidence for establishing mathematical propositions. The talk investigates the various successes and pitfalls different kinds of contextualists will run into when attempting to apply this defence.