I am an Associate Professor at the Oxford Philosophy Faculty, and a Tutorial Fellow at Magdalen College. I was previously a Junior Research Fellow at Trinity College Cambridge, a PhD student at MIT, and a BPhil and undergraduate student at Oxford.
Most of my research is in epistemology, sometimes straying into decision theory. I'm particularly interested in debates about epistemic externalism; in issues of 'evidence management' (what kind of control we can/should exert over what evidence we receive); and in their interaction. In thinking about those issues, I've worked on the connection between knowledge and chance, the nature of self-knowledge, epistemic contextualism, and the rationality of risk-aversion.
I also have research interests in ethics, metaphysics, and the philosophy of language.
Below are links to my published work (updated October '25). Do let me know if any don't work!
Iterated Knowledge isn't Better Knowledge. Journal of Philosophy, forthcoming.
It is tempting to think that we can measure the quality or strength of someone’s knowledge by the number of iterations it permits: you know p better if you know that you know p, better yet if you know that you know that you know p, and so on. I show that this idea is deeply misguided. Even in set-ups that look maximally friendly to the idea, one can construct cases where someone goes from having available only a single iteration of knowledge that p to having arbitrarily many such iterations, without their knowledge that p becoming better in any way. This has interesting implications for knowledge norms on action, assertion, and inquiry.
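[Penultimate Draft]
Stated schematically (in notation used elsewhere on this page, not in the paper itself): the tempting idea measures the strength of one's knowledge that p by the largest n for which Kⁿp, i.e. K...Kp with n iterations of K, holds; the paper constructs cases in which n grows arbitrarily while the knowledge that p gets no better.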
Belief Revision Normalized (with Jeremy Goodman). Journal of Philosophical Logic, 54: 1-49, 2025.
We use the normality framework of Goodman and Salow (2018, 2021, 2023) to investigate the dynamics of rational belief. The guiding idea is that people are entitled to believe that their circumstances aren't especially abnormal. More precisely, a rational agent's beliefs rule out all and only those possibilities that are either (i) ruled out by their evidence or (ii) sufficiently less normal than some other possibility not ruled out by their evidence. Working within this framework, we argue that the logic of rational belief revision is much weaker than is usually supposed. We do so by isolating a natural family of orthodox principles about belief revision, describing realistic cases in which these principles seem to fail, and showing how these counterexamples are predicted by independently motivated models of the cases in question. In these models, whether one evidential possibility counts as sufficiently less normal than another is determined by underlying probabilities (together with a contextually determined question). We argue that the resulting probabilistic account of belief compares favorably with other such accounts, including Lockeanism (Foley 1993), a 'stability' account inspired by Leitgeb (2017), the 'tracking theory' of Lin and Kelly (2012), and the influential precursor of Levi (1967). We show that all of these accounts yield subtly different but similarly heterodox logics of belief revision.
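[Open Access Published Version]
The guiding idea, stated schematically (the notation is illustrative, not the paper's): where E is the set of possibilities left open by one's evidence and 'v ≫ w' abbreviates 'v is sufficiently more normal than w', a possibility w is compatible with what one rationally believes just in case w ∈ E and there is no v ∈ E with v ≫ w.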
Fallibility and Dogmatism. Australasian Journal of Philosophy, 103: 23-38, 2025.
The strongest version of the dogmatism puzzle argues that, when we know something, we should resolve to ignore or avoid evidence against it. The best existing responses are fallibilist, and hold that decisions should be governed by underlying probabilities rather than our knowledge. I argue that this is an overreaction: by paying close attention to the principles governing belief-revision, and to subtly different ways in which knowledge can govern decision-making, we can dissolve the puzzle without the need for controversial theoretical commitments. The resulting solution demonstrates fruitful and underexplored points of interaction between 'traditional' epistemology and 'formal' theories of belief-revision, and clears the ground for more systematic theorizing about how and when we should be open to changing our minds.
[Open Access Published Version]
Epistemology Normalized (with Jeremy Goodman). Philosophical Review, 132: 89-145, 2023.
We offer a general framework for theorizing about the structure of knowledge and belief in terms of the comparative normality of situations compatible with one's evidence. The guiding idea is that, if a possibility is sufficiently less normal than one's actual situation, then one can know that that possibility does not obtain. This explains how people can have inductive knowledge that goes beyond what is strictly entailed by their evidence. We motivate the framework by showing how it illuminates knowledge about the future, knowledge of lawful regularities, knowledge about parameters measured using imperfect instruments, the connection between knowledge, belief, and probability, and the dynamics of knowledge and belief in response to new evidence.
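In the same illustrative notation as above: where w is one's actual situation and v is compatible with one's evidence, one is in a position to know that v does not obtain just in case w ≫ v; inductive knowledge thus extends beyond one's evidence exactly when some evidential possibilities are sufficiently less normal than actuality.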
Accurate Updating for the Risk-Sensitive (with Catrin Campbell-Moore). British Journal for the Philosophy of Science, 73: 751-776, 2022.
Philosophers have recently attempted to justify particular methods of belief revision by showing that they are the optimal means towards the epistemic end of accurate belief. These attempts, however, presuppose that means should be evaluated according to classical expected utility theory; and many maintain that expected utility theory is too restrictive a theory of means-end rationality, ruling out too many natural ways of taking risk into account. We investigate what belief-revision procedures are supported by accuracy-theoretic considerations once we allow agents to be risk-sensitive. We conclude that, if accuracy-theoretic considerations tell risk-sensitive agents anything about belief-revision, they tell them the same thing they tell agents that maximize expected utility: they should conditionalize.
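For reference, to conditionalize on total evidence E (with P(E) > 0) is to move from one's prior P to the new credence function P_E defined by P_E(A) = P(A | E) = P(A ∧ E)/P(E).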
Deference Done Better (with Kevin Dorst, Ben Levinstein, Brooke Husic, and Branden Fitelson). Philosophical Perspectives, 35: 99-150, 2021.
There are many things---call them "experts"---that you should defer to in forming your opinions. The trouble is, many such experts are modest: they're less than certain that they are worthy of deference. When this happens, the standard theories of deference break down: the most popular ("Reflection"-style) principles collapse to inconsistency, while their most popular ("New-Reflection"-style) variants allow for deference to anti-experts. We propose a middle way: deference to an expert involves preferring to make any decision using their opinions instead of your own. In a slogan, deferring opinions is deferring decisions. Generalizing the proposal of Dorst (2020), we first formulate a new principle that shows exactly how your opinions must defer to an expert's for this to be so. We then build off the result of Levinstein (ms) to show that this principle is also equivalent to the constraint that you must always expect the expert's estimates to be more accurate than your own. Finally, we characterize the conditions an expert's opinions must meet to be worthy of deference in this sense, showing how they sit naturally between the too-strong constraints of Reflection and the too-weak constraints of New Reflection.
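For reference, the Reflection-style schema at issue is standardly glossed as Cr(A | the expert's credence in A is x) = x, for every proposition A and value x; the breakdown described above arises once the expert herself assigns positive credence to violating such a schema.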
Avoiding Risk and Avoiding Evidence (with Catrin Campbell-Moore). Australasian Journal of Philosophy, 98: 495-515, 2020.
It is natural to think that there's something epistemically objectionable about avoiding evidence, at least in ideal cases. We argue that this natural thought is inconsistent with a kind of risk avoidance that is both widespread and intuitively rational. More specifically, we argue that if the kind of risk avoidance defended by Lara Buchak is rational, avoiding evidence can be epistemically commendable.
In the course of our argument, we also lay some foundations for the study of epistemic utility, or accuracy, for risk-avoidant agents.
Elusive Externalism. Mind, 128: 397-427, 2019.
Epistemologists have recently noted a tension between (i) denying an extremely strong form of access internalism and (ii) maintaining that rational agents cannot be epistemically akratic, believing claims akin to 'p, but I shouldn't believe that p'. I bring out the tension, and develop a new way to resolve it. The basic strategy is to say that access internalism is false, but that rational agents always have to believe that the internalist principles happen to be true of them. I show that this allows us to do justice to the motivations behind both (i) and (ii). And I explain in some detail what a view of evidence that implements this strategy, and makes it independently plausible, might look like.
Don't Look Now (with Arif Ahmed). British Journal for the Philosophy of Science, 70: 327-350, 2019.
Good's Theorem is the apparent platitude that it is always rational to 'look before you leap': to gather (reliable) information before making a decision when doing so is free. We argue that Good's Theorem is not platitudinous and may be false. And we argue that the correct advice is rather to 'make your act depend on the answer to a question'. Looking before you leap is rational when, but only when, it is a way to do this.
The Externalist's Guide to Fishing for Compliments. Mind, 127: 691-728, 2018.
Suppose you'd like to believe that p (e.g. that you are popular), whether or not it's true. What can you do to help? A natural initial thought is that you could engage in Intentionally Biased Inquiry: you could look into whether p, but do so in a way that you expect to predominantly yield evidence in favour of p. The paper hopes to do two things. The first is to argue that this initial thought is mistaken: intentionally biased inquiry is impossible. The second is to show that reflections on intentionally biased inquiry strongly support a controversial 'access' principle which states that, for all p, if p is (not) part of our evidence, then that p is (not) part of our evidence is itself part of our evidence.
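Schematically, writing 'Ep' for 'p is part of our evidence', the access principle combines positive access, Ep → EEp, with negative access, ¬Ep → E¬Ep.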
Transparency and the KK Principle (with Nilanjan Das). Noûs, 52: 3-23, 2018.
An important question in contemporary epistemology is whether an agent who knows that p, is also thereby in a position to know that she knows that p. We explain how a "transparency" account of introspection, which maintains that we learn about our attitudes towards a proposition by reflecting not on ourselves but rather on that very proposition, supports an affirmative answer. In particular, we show that such an account allows us to reconcile a version of the KK principle with an externalist or reliabilist conception of knowledge commonly thought to make that principle particularly problematic.
Taking a chance on KK (with Jeremy Goodman). Philosophical Studies, 175: 183-196, 2018.
Dorr, Goodman, and Hawthorne (2014) present a surprising example challenging plausible principles about the interaction between knowledge and chance. Implicit in their discussion is a new argument against KK, the principle that an agent who knows p is in a position to know that he knows p. We bring out this argument, and investigate possible responses for defenders of KK, establishing new connections between KK and a variety of knowledge-chance principles.
Partiality and Retrospective Justification. Philosophy and Public Affairs, 45: 8-26, 2017.
Sometimes changes in an agent's partial values can cast a positive light on an earlier action, which was wrong when it was performed. Based on independent reflections about the role of partiality in determining when blame is appropriate, I argue that in such cases the agent shouldn't feel remorse about her action and that others can't legitimately blame her for it, even though that action was wrong. The action thus receives a certain kind of retrospective justification.
Lewis on Iterated Knowledge. Philosophical Studies, 173: 1571-1590, 2016.
The status of the knowledge iteration principles in the account provided by Lewis in "Elusive Knowledge" is disputed. By distinguishing carefully between what in the account describes the contribution of the attributor's context and what describes the contribution of the subject's situation, we can resolve this dispute in favour of Holliday's (2015) claim that the iteration principles are rendered invalid. However, that is not the end of the story. For Lewis's account still predicts that counterexamples to the negative iteration principle (¬Kp→K¬Kp) come out as elusive: such counterexamples can occur only in possibilities which the attributors of knowledge are ignoring. This consequence is more defensible than it might look at first sight.
Colloquium Contributions, Handbook Articles, Conference Proceedings, Reviews, etc.
A Fragile Compromise: Goldstein on Omega Knowledge without KK. Inquiry, forthcoming.
In Iterated Knowledge, Simon Goldstein develops three pictures of how people might have substantive omega-knowledge despite the falsity of the KK principle. I argue that only one of these pictures, the one associated with a principle Goldstein calls 'Fragility', finds a natural home in a broader normality-theoretic conception of knowledge. I then argue that this picture has few robust advantages over one that simply accepts the KK principle and, when developed in independently motivated ways, vindicates this principle for sufficiently sophisticated believers.
[Open Access Published Version]
The Value of Evidence. In Maria Lasonen-Aarnio and Clayton Littlejohn (eds) The Routledge Handbook for the Philosophy of Evidence, Routledge, 2024.
Is it always better to know more? Not always, surely. But perhaps in some interesting, independently specifiable set of circumstances. I explain what those circumstances might be; reconstruct an informal argument that evidence is always valuable in those circumstances; and survey some decision theoretic and epistemological objections to that argument.
Belief Revision from Probability (with Jeremy Goodman). In Rineke Verbrugge (ed) Proceedings of the Nineteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK 2023), Electronic Proceedings in Theoretical Computer Science 379: 308-317, 2023.
In Goodman and Salow (2021, 2023), we develop a question-relative, probabilistic account of belief. On this account, what someone believes relative to a given question is (i) closed under entailment, (ii) sufficiently probable given their evidence, and (iii) sensitive to the relative probabilities of the answers to the question. Here we explore the implications of this account for the dynamics of belief. We show that the principles it validates are much weaker than those of orthodox theories of belief revision like AGM, but still stronger than those valid according to the popular 'Lockean' theory of belief, which equates belief with high subjective probability. We then consider a restricted class of models, suitable for many but not all applications, and identify some further natural principles valid on this class. We conclude by arguing that the present framework compares favorably to the rival probabilistic accounts of belief developed by Leitgeb (2014, 2017) and Lin and Kelly (2012).
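For comparison, the Lockean theory mentioned at the end says simply: believe p just in case the probability of p on one's evidence exceeds some fixed threshold t (1/2 < t < 1). Conditions (i) and (iii) above are what the question-relative account adds to this bare probabilistic requirement.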
Knowledge from Probability (with Jeremy Goodman). In Joseph Halpern and Andrés Perea (eds) Proceedings of the Eighteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK 2021), Electronic Proceedings in Theoretical Computer Science 335: 171-186, 2021.
We present a probabilistic theory of knowledge and belief, building on two recent strands of research. The first maintains that knowledge and (rational) belief can be understood in terms of the comparative normality of different possibilities. The second maintains that what a person knows or believes is always relative to a question. Our guiding observation is that, relative to a question, there are natural ways of defining comparative normality in terms of evidential probability. We develop this basic idea and explore its distinctive predictions concerning the contours and dynamics of inductive knowledge and belief. We show that it allows us to model much of the inductive knowledge we ordinarily take ourselves to have, about the future, about scientific laws, and about the values of measured quantities. And it does so without violating the plausible principle that we can only know or (rationally) believe propositions with a high enough probability of being true.
Review of Probabilistic Knowledge by Sarah Moss. Mind, 129: 999-1008, 2020.
[Penultimate Draft] [Published Version]
Review of Contextualizing Knowledge by Jonathan Jenkins Ichikawa. Notre Dame Philosophical Reviews, 2018.
[Open Access Published Version]
Work in Progress
Is a Little Learning Dangerous?
I argue that a little learning is often dangerous even for ideal reasoners who are operating in extremely simple scenarios and know all the relevant facts about how the evidence is generated. More precisely, I show that, on many plausible ways of assigning value to a credence in H, ideal Bayesians should sometimes expect other ideal Bayesians to end up with a worse credence if they gather additional evidence, even when they agree completely about the likelihoods of the evidence given both H and not-H. This provides a new reason for pessimism about the prospect of disagreeing individuals resolving their disagreement through consulting additional evidence.
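[Email me for Draft]
To illustrate the kind of computation involved (an illustrative sketch, not code from the paper): the snippet below lets one Bayesian compute her expectation of the value of another Bayesian's credence in H, before and after the latter updates on evidence E, where the two share the likelihoods but may differ in their priors. The Brier-style value function is only one example; the paper's claim concerns value functions and priors for which the 'after' expectation falls below the 'before' one.

    # Illustrative sketch: Agent 1's expected value of Agent 2's credence in H,
    # before vs. after Agent 2 updates on evidence E. The agents share the
    # likelihoods P(E|H) and P(E|not-H) but may differ in their priors on H.

    def posterior(prior, like_h, like_not_h, e_observed):
        """Bayes' theorem for a binary hypothesis H, on learning E or not-E."""
        lh = like_h if e_observed else 1 - like_h
        ln = like_not_h if e_observed else 1 - like_not_h
        return prior * lh / (prior * lh + (1 - prior) * ln)

    def expected_value_after(p1_h, p2_h, like_h, like_not_h, value):
        """Agent 1's expectation of value(Agent 2's updated credence, truth),
        averaging over the four (H, E) cells with Agent 1's probabilities."""
        total = 0.0
        for h, p_h in ((True, p1_h), (False, 1 - p1_h)):
            like = like_h if h else like_not_h
            for e, p_e_given_h in ((True, like), (False, 1 - like)):
                cred = posterior(p2_h, like_h, like_not_h, e)
                total += p_h * p_e_given_h * value(cred, h)
        return total

    def brier(credence, truth):
        """One example value function: negative quadratic (Brier) loss."""
        return -((credence - (1.0 if truth else 0.0)) ** 2)

    p1, p2, lh, ln = 0.5, 0.9, 0.9, 0.1
    before = p1 * brier(p2, True) + (1 - p1) * brier(p2, False)
    after = expected_value_after(p1, p2, lh, ln, brier)
    print(before, after)  # does Agent 1 expect Agent 2's update to help?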
Gnosticism and Inexact Knowledge
Consequentialist versions of gnosticism, which hold that a belief is rational just in case it is sufficiently likely to constitute knowledge, have many attractions. However, I show that such views make very implausible dynamic predictions when combined with the most influential accounts of the kind of 'inexact knowledge' we have in cases involving imprecise measurement or objective chanciness. These accounts are built on the idea that such knowledge requires a margin for error; and I show that the problems are largely alleviated if we reject this idea. I leave open whether the correct response is to reject gnosticism, or the idea that inexact knowledge requires a margin for error.
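[Email me for Draft]
For reference, the margin-for-error idea, in a standard Williamson-style gloss (not necessarily the formulation at issue in the paper): where a measured quantity's true value is v, one is in a position to know that its value is not n only if |v − n| exceeds some margin m > 0, so that inexact knowledge always leaves a buffer around the truth.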
I defend the possibility of rational Spoilage: cases where someone initially knows that p, later continues to have a justified belief that p stored in their memory, but does not at this later point know that p. I show how such cases can be modelled on a normality-theoretic account of knowledge and justification, and highlight the problem they raise for existing error theories about intuitions supporting knowledge defeat.
[Email me for Draft]Telling Time with a Broken Clock (with Jeremy Goodman)
A paper on normality and Gettier cases.
[Email me for Draft]