Kevin Dorst

Academic Publications

Evidence: A Guide for the Uncertain. (Forthcoming). Philosophy and Phenomenological Research.

Assume that it is your evidence that determines what opinions you should have. I argue that since you should take peer disagreement seriously, evidence must have two features. (1) It must sometimes warrant being modest: uncertain what your evidence warrants, and (thus) uncertain whether you’re rational. (2) But it must always warrant being guided: disposed to treat your evidence as a guide. It is surprisingly difficult to vindicate these dual constraints. But diagnosing why this is so leads to a proposal—Trust—that is weak enough to allow modesty but strong enough to yield many guiding features. In fact, I claim that Trust is the Goldilocks principle—for it is necessary and sufficient to vindicate the claim that you should always prefer to use free evidence. Upshot: Trust lays the foundations for a theory of disagreement and, more generally, an epistemology that permits self-doubt—a modest epistemology.
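
As a rough gloss (my formulation, for orientation; the paper’s official statement of Trust is more careful), principles in this family have the shape of a deference inequality:

    P( A | [ the evidence supports credence of at least t in A ] )  ≥  t,    for every proposition A and threshold t.

Read this way, the principle lets you stay unsure exactly what your evidence supports while still treating evidence-backed opinions as probably true; that combination is what makes room for modesty and guidance at once.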

Abominable KK Failures. (2019). Mind 128 (512): 1227–1259.

KK is the thesis that if you can know p, you can know that you can know p. Though it’s unpopular, a flurry of considerations has recently emerged in its favor. Here we add fuel to the fire: standard resources allow us to show that any failure of KK will lead to the knowability and assertability of abominable indicative conditionals of the form, ‘If I don’t know it, p.’ Such conditionals are manifestly not assertable—a fact that KK defenders can easily explain. I survey a variety of KK-denying responses and find them wanting. Those who object to the knowability of such conditionals must either (i) deny the possibility of harmony between knowledge and belief, or (ii) deny well-supported connections between conditional and unconditional attitudes. Meanwhile, those who grant knowability owe us an explanation of such conditionals’ unassertability—yet no successful explanations are on offer. Upshot: we have new evidence for KK.
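
To see the shape of the argument (a compressed reconstruction under standard assumptions: factivity of knowledge, closure under known entailment, and a material reading of the indicative):

    Kp ⊨ p,  and  p ⊨ (¬Kp → p);  so by closure,  Kp ⊨ K(¬Kp → p).

So anyone who knows p is in a position to know ‘If I don’t know it, p’. And if KK fails, you can know p without being in a position to know that you know it, so ¬Kp stays epistemically open for you: the conditional is knowable and non-vacuous from your own perspective, yet manifestly bizarre to assert.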

Lockeans Maximize Expected Accuracy. (2019). Mind 128 (509): 175–211.

The Lockean Thesis says that you must believe p iff you’re sufficiently confident of it. On some versions, the ‘must’ asserts a metaphysical connection; on others, it asserts a normative one. On some versions, ‘sufficiently confident’ refers to a fixed threshold of credence; on others, it varies with proposition and context. Claim: the Lockean Thesis follows from epistemic utility theory—the view that rational requirements are constrained by the norm to promote accuracy. Different versions of this theory generate different versions of Lockeanism; moreover, a plausible version of epistemic utility theory meshes with natural language considerations, yielding a new Lockean picture that helps to model and explain the role of beliefs in inquiry and conversation. Your beliefs are your best guesses in response to the epistemic priorities of your context. Upshot: we have a new approach to the epistemology and semantics of belief. And it has teeth. It implies that the role of beliefs is fundamentally different from what many have thought, and in fact supports a metaphysical reduction of belief to credence.
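
To illustrate the underlying mechanism (a minimal sketch of the textbook expected-accuracy calculation, with illustrative accuracy weights of my choosing, not the paper’s):

    # Toy expected-accuracy calculation (illustrative weights, not the paper's).
    # A true belief is worth R, a false belief costs W, and suspending scores 0.
    R, W = 1.0, 3.0

    def best_attitude(c):
        """Return the attitude toward p that maximizes expected accuracy, given credence c in p."""
        eu_believe = c * R - (1 - c) * W  # expected accuracy of believing p
        return "believe" if eu_believe > 0.0 else "suspend"

    # Believing wins exactly when c > W / (R + W) = 0.75: a fixed Lockean threshold.
    for c in (0.5, 0.7, 0.76, 0.9):
        print(c, best_attitude(c))

Different choices of R and W move the threshold, which is one way different versions of epistemic utility theory generate different versions of Lockeanism; context-sensitive weights would likewise make the threshold vary with context.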

Higher-Order Uncertainty. (2019). In M. Skipper Rasmussen & A. Steglich-Petersen (eds.), Higher-Order Evidence: New Essays. Oxford University Press, 35–61.

You have higher-order uncertainty iff you are uncertain of what opinions you should have. I defend three claims about it. First, the higher-order evidence debate can be helpfully reframed in terms of higher-order uncertainty. The central question becomes how your first- and higher-order opinions should relate—a precise question that can be embedded within a general, tractable framework. Second, this question is nontrivial. Rational higher-order uncertainty is pervasive, and lies at the foundations of the epistemology of disagreement. Third, the answer is not obvious. The Enkratic Intuition—that your first-order opinions must “line up” with your higher-order opinions—is incorrect; epistemic akrasia can be rational. If all this is right, then it leaves us without answers—but with a clear picture of the question, and a fruitful strategy for pursuing it.
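
Concretely (my gloss of the standard setup), such a framework assigns to each world w the credence function P_w that is rational at w; then

    you have higher-order uncertainty at w   iff   P_w( { w′ : P_w′ ≠ P_w } ) > 0,

i.e. the rational credences at w leave open that some other credence function is the rational one. Principles like the Enkratic Intuition can then be stated as precise constraints relating P_w to the credence functions it leaves open.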

Can the Knowledge Norm Co-Opt the Opt-Out? (2014). Thought: A Journal of Philosophy 3 (4): 273–282.

The Knowledge Norm of Assertion (KNA) claims that it is proper to assert that p only if one knows that p. Though supported by a wide range of evidence, it appears to generate incorrect verdicts when applied to utterances of “I don’t know.” I argue that, rather than being an objection to KNA, this linguistic data shows that “I don’t know” does not standardly function as a literal assertion about one’s epistemic status; rather, it is an indirect speech act whose primary illocutionary force is to opt out of the speaker’s conversational responsibilities. This explanation both reveals that the opt-out is an under-appreciated type of illocutionary act with a wide range of applications, and shows that the initial data in fact supports KNA over its rivals.

Public Philosophy

Why Rational People Polarize. (2019). The Phenomenal World, January 24.

Dissertation: Modest Epistemology

Thinking properly is hard. Sometimes I mess it up. I definitely messed it up yesterday. I’ll likely mess it up tomorrow. Maybe I’m messing it up right now.

I’m guessing you’re like me. If so, then we’re both modest: we’re unsure whether we’re thinking rationally. And, it seems, we should be: given our knowledge of our own limitations, it’s rational for us to be unsure whether we’re thinking rationally. How, then, should we think? How does uncertainty about what it’s rational to think affect what it’s rational to think? And how do our judgments of people’s (ir)rationality change once we realize that it can be rational to be modest? My dissertation makes a start on answering those questions.

Chapter 1 ("Higher-Order Uncertainty") introduces a general framework for modeling situations in which you are rational to be unsure what the rational opinions are. I first show how this framework allows us to precisely formulate the questions from the higher-order evidence literature. I then use it to argue that rational modesty is needed to explain the epistemic force of peer disagreement, and therefore that any theory of such disagreement must be based on a general theory of rational modesty. Many have suggested that such a theory can be easily formulated based on the enkratic intuition that your first-order opinions must “line up” with your higher-order ones. But I argue that this is incorrect: whenever modesty is rational, so too is epistemic akrasia. We need to look elsewhere for a general theory of rational modesty.

Chapter 2 ("Evidence: A Guide for the Uncertain") offers one. I build a theory that—in a precise sense—allows as much modesty as possible while still guaranteeing that rationality is a guide. The key principle—which I call Trust—formalizes the truism that it’s likely that what the evidence supports is true. I show that Trust permits modesty, ensures that rational opinions are correlated with truth, and is necessary and (in a wide class of scenarios) sufficient to vindicate the truism that you should always prefer to respond rationally to your evidence. In sum, Trust establishes that there is a principled way for rational people to be modest.

Chapter 3 ("Rational Polarization") applies this theory of rational modesty to the psychology of human reasoning. In particular, a wide range of studies suggests that people have a tendency to predictably polarize in the face of conflicting evidence: to gather and interpret evidence in a way that leads them to predictably strengthen their prior beliefs. This “confirmation bias” is standardly taken to be a hallmark of human irrationality. It need not be. I first prove that whenever modesty can be rational, so too can predictable polarization. I then argue, further, that this abstract possibility may play a role in the actual polarization we observe. In particular, given common structures of rational modesty generated by the process of cognitive search, rational agents who care only about the truth should sometimes exhibit confirmation bias.

Work in Progress

"Overconfidence" is Rational

Do people tend to be overconfident in their opinions? Psychologists think so. They have run calibration studies in which they ask subjects a variety of questions, and then compare their confidence in their answers to the proportion that were true. Under certain conditions, an "overconfidence effect" is robust: for example, of the answers people are 80% confident in, only 60% are true. Psychologists have inferred that people are irrationally overconfident—that they are more confident than they should be, given their evidence. My question is when and why this inference is warranted. Although it is not deductively valid, the inference is in principle often warranted: whenever you should defer to the (independent) opinions it would be rational for a person to have, miscalibration is good evidence of irrationality. However, in practice the inference is problematic—for when you should not defer, the "overconfidence effect" can in fact be evidence that the person is rational. I argue that attention to these epistemological details reveals that the empirical evidence from calibration studies in fact fits well with the hypothesis that people’s degrees of confidence tend to be rational.
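
The underlying calibration computation is simple (a minimal sketch with made-up data, chosen to mirror the 80%/60% example above):

    from collections import defaultdict

    # Toy calibration table (made-up data): pair each answer's stated
    # confidence with whether the answer was in fact true.
    answers = [(0.8, True), (0.8, True), (0.8, True), (0.8, False), (0.8, False),
               (0.6, True), (0.6, True), (0.6, False)]

    bins = defaultdict(list)
    for confidence, correct in answers:
        bins[confidence].append(correct)

    # An "overconfidence effect" shows up as hit rates below stated confidence.
    for confidence, results in sorted(bins.items()):
        hit_rate = sum(results) / len(results)
        print(f"stated confidence {confidence:.0%}: {hit_rate:.0%} true")

The epistemological question is what such a table licenses us to infer; as the abstract notes, miscalibration is evidence of irrationality only given substantive assumptions about deference.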

Rational Polarization

Groups of people are disposed to divide into subgroups that predictably polarize on a variety of topics: individuals in the same subgroup tend to converge in opinions, while individuals in different subgroups tend to diverge in opinions. This widely confirmed empirical tendency is standardly taken to be a hallmark of human irrationality. It need not be. I’ll first show that rational, predictable polarization is possible: whenever you face ambiguous evidence—evidence that you should be unsure how to react to—predictable polarization can be fully epistemically rational. This claim can be proven in a general Bayesian framework, as well as illustrated with a simple demonstration. I’ll then argue, further, that this abstract possibility may play a role in the actual polarization we observe. One core contributor to predictable polarization is confirmation bias: roughly, the tendency for people to seek and interpret evidence in a way that is partial to their prior beliefs. And I’ll argue that—given common structures of evidential ambiguity—rational agents who care only about the truth should sometimes exhibit confirmation bias.
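
One way to see the abstract possibility (a toy model of my own construction, in the spirit of the simple demonstration mentioned above): when evidence is ambiguous, the set of worlds you cannot rule out need not partition the state space, and then even exact Bayesian conditioning yields a predictable shift in credence.

    # Toy frame (my construction): three equiprobable worlds. At world w you
    # learn E[w], the set of worlds you cannot rule out. The E[w] overlap
    # without coinciding, so the evidence is ambiguous (non-partitional).
    worlds = [1, 2, 3]
    prior = {w: 1 / 3 for w in worlds}
    E = {1: {1, 2}, 2: {2}, 3: {2, 3}}  # reflexive, but not a partition
    A = {1}                             # the proposition at issue

    def posterior(evidence, proposition):
        """Credence in proposition after conditioning the prior on evidence."""
        return (sum(prior[w] for w in proposition & evidence)
                / sum(prior[w] for w in evidence))

    expected = sum(prior[w] * posterior(E[w], A) for w in worlds)
    print(f"prior in A:              {prior[1]:.3f}")  # 0.333
    print(f"expected posterior in A: {expected:.3f}")  # 0.167

With partitional evidence the expected posterior would equal the prior (the martingale property), so no predictable movement would be possible; the overlap among the E[w] is what opens the door.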

Evidence of Evidence: A Higher-Order Approach (with Branden Fitelson and Brooke Husic)

"Evidence of evidence is evidence" (EEE) is a slogan that has stirred much recent debate in epistemology. The intuitive idea seems straightforward: if you have reason to think that there is evidence supporting p, then—since what's supported by evidence is likely to be true—you thereby have (some) reason to think that p. However, formulating precise, nontrivial versions of this thesis has proven difficult. In this paper we propose to do so using a higher-order approach—a framework that lets us model (higher-order) opinions about what opinions you should have, i.e. opinions about what opinions your evidence warrants. This framework allows us to formulate propositions about your evidence as objects of uncertainty, and therefore to formulate principles connecting evidence about evidence for p to evidence about p. Drawing on a general theory of rational higher-order uncertainty developed elsewhere, we examine which versions of EEE principles are tenable—showing that although many are not, several strong, intuitive ones are. If these details are correct, they have (broadly conciliationist) implications for the peer disagreement debate that started the EEE discussion. And regardless of the details, we hope to show that a higher-order approach is fruitful for formulating and testing precise versions of the "evidence of evidence is evidence" slogan.

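For a taste of the kind of principle at issue (an illustrative formulation of mine, not one drawn from the paper), write P̂ for the credence function your evidence warrants; then one candidate EEE principle says that evidence that the rational credence in p is high is itself evidence for p:

    P( p | [ P̂(p) ≥ t ] )  ≥  P(p),    for thresholds t ≥ P(p).

Whether candidates like this are tenable, trivial, or false is exactly the sort of question the higher-order framework is built to adjudicate.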