KEVIN DORST
  • Bio
  • Research
  • Teaching
  • General Audience
  • Substack

Work in Progress

Bayesians Commit the Gambler's Fallacy. (Mathematica Notebook)

  
The gambler’s fallacy is the tendency to expect random processes to switch more often than they actually do—for example, to think that after a string of tails, a heads is more likely. It’s often taken to be evidence for irrationality. It isn’t. Rather, it’s to be expected from a group of Bayesians who begin with causal uncertainty, and then observe unbiased data from an (in fact) statistically independent process. Although they converge toward the truth, they do so in an asymmetric way—ruling out “streaky” hypotheses more quickly than “switchy” ones. As a result, the majority (and the average) exhibit the gambler’s fallacy. If they have limited memory, this tendency persists even with arbitrarily large amounts of data. Indeed, such Bayesians exhibit a variety of the empirical trends found in studies of the gambler’s fallacy: they expect switches after short streaks but continuations after long ones; these nonlinear expectations vary with their familiarity with the causal system; their predictions depend on the sequence they’ve just seen; they produce sequences that are too switchy; and they exhibit greater rates of gambler’s reasoning when forming binary predictions than when forming probability estimates. In short: what’s been thought to be evidence for irrationality may instead be rational responses to limited data and memory.
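
A minimal sketch of the kind of updating at issue (an illustrative toy of my own, not the paper's model, which is developed in the linked Mathematica notebook): a Bayesian who is unsure whether the process is streaky, independent, or switchy updates on a sequence of flips and forms a prediction about whether the next flip will differ from the last.

```python
# Toy hypothesis space (assumed for illustration): Markov chains indexed by q,
# the probability that the next flip differs from the previous one.
# q < 0.5 is "streaky", q = 0.5 is statistically independent, q > 0.5 is "switchy".
HYPOTHESES = {0.3: 1/3, 0.5: 1/3, 0.7: 1/3}   # switch probability q -> prior

def posterior(flips):
    """Posterior over q after observing a sequence of flips, e.g. 'HTTH'."""
    post = dict(HYPOTHESES)
    for prev, curr in zip(flips, flips[1:]):
        switched = (prev != curr)
        for q in post:
            post[q] *= q if switched else (1 - q)
    total = sum(post.values())
    return {q: p / total for q, p in post.items()}

def predicted_switch(flips):
    """Credence that the next flip differs from the last one observed."""
    return sum(q * p for q, p in posterior(flips).items())

print(predicted_switch("TTT"))    # after a short streak of tails
print(predicted_switch("HTHT"))   # after an alternating run
```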

Reflection, Introspection, and Book. (with Kevin Zollman)

  
The much-debated Reflection principle states that a coherent agent’s credences must match their estimates for their future credences. Defenders claim that there are Dutch-book arguments in its favor, putting it on the same normative footing as probabilistic coherence. Critics claim that those arguments rely on the implicit, implausible assumption that the agent is introspective: that they are certain what their own credences are. In this paper, we clarify this debate by surveying several different conceptions of the book scenario. We show that the crucial disagreement hinges on whether agents who are not introspective are known to reliably act on their credences: if they are, then coherent Reflection failures are (at best) ephemeral; if they aren’t, then Reflection failures can be robust—and perhaps rational and coherent. We argue that the crucial question for future debates is which notion of coherence makes sense for such unreliable agents, and sketch a few avenues to explore.

Academic Publications

The Conjunction Fallacy: Confirmation or Relevance? (with WooJin Chung, Matthew Mandelkern, and Salvador Mascarenhas)

  
The conjunction fallacy is the well-documented reasoning error in which people rate a conjunction A&B as more probable than one of its conjuncts, A. Many explanations appeal to the fact that B has a high probability in the given scenarios, but Katya Tentori and collaborators have challenged such approaches. They report experiments suggesting that degree of confirmation—rather than probability—is the central determinant of the conjunction fallacy. In this paper, we have two goals. First, we address a confound in Tentori et al.’s experiments: they failed to control for the fact that in their stimuli where B is confirmed, it is also conversationally relevant in the sense that it fits with the topic or question under discussion. Conversely, when B has a high probability but is not confirmed, it is conversationally irrelevant. Consequently, it is possible that conversational relevance, rather than confirmation, is responsible for the differences they found between confirmed and probable hypotheses. Second, inspired by recent theoretical work, we aim to give the first empirical investigation of the hypothesis that this type of conversational relevance on its own—independently of degree of confirmation—can be an important factor in the conjunction fallacy. We report on two experiments that vary Tentori et al.’s design by making B relevant without changing its degree of probability or confirmation. We found that doing so increases the rate of the conjunction fallacy, suggesting that relevance plays an important role in the effect.

Rational Polarization. 2023. The Philosophical Review 132 (3): 355-458
(Abridged version; Twitter thread; Mathematica notebook; Presentation; Explainer video)
Selected for the 2023 Philosopher's Annual

  
Predictable polarization is everywhere: we can often predict how people’s opinions—including our own—will shift over time. Extant theories either neglect the fact that we can predict our own polarization, or explain it through irrational mechanisms. They needn’t. Empirical studies suggest that polarization is predictable when evidence is ambiguous, i.e. when the rational response is not obvious. I show how Bayesians should model such ambiguity, and then prove that—assuming rational updates are those which obey the value of evidence (Blackwell 1953; Good 1967)—ambiguity is necessary and sufficient for the rationality of predictable polarization. The main theoretical result is that there can be a series of such updates, each of which is individually expected to make you more accurate, but which together will predictably polarize you. Polarization results from asymmetric increases in accuracy. This mechanism is not only theoretically possible, but empirically plausible. I argue that cognitive search—searching a cognitively accessible space for a particular item—often yields asymmetrically ambiguous evidence; I present an experiment supporting its polarizing effects; and I use simulations to show how it can explain two of the core causes of polarization: confirmation bias and the group polarization effect.
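
A minimal worked illustration of how ambiguous evidence can make belief change predictable (a generic two-world toy frame of my own, not necessarily the paper's example): at one world your evidence settles where you are; at the other it cannot distinguish the two worlds, so your expected future credence in H exceeds your current credence even though you simply condition on your evidence.

```python
# Two worlds; H = {w1}. Evidence is non-partitional ("ambiguous"): at w2 your
# evidence cannot rule out w1.  Toy numbers, for illustration only.
prior = {"w1": 0.5, "w2": 0.5}
evidence = {"w1": {"w1"}, "w2": {"w1", "w2"}}
H = {"w1"}

def posterior_in_H(world):
    """Credence in H after conditioning the prior on the evidence held at `world`."""
    E = evidence[world]
    return sum(prior[w] for w in E & H) / sum(prior[w] for w in E)

print(posterior_in_H("w1"), posterior_in_H("w2"))        # 1.0 and 0.5
print(sum(prior[w] * posterior_in_H(w) for w in prior))  # 0.75 > 0.5: a predictable shift toward H
```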

Being Rational and Being Wrong. 2023. Philosophers' Imprint 23 (3): 1-25
(Twitter thread)

  
Do people tend to be overconfident? Many think so. They’ve run studies on whether people are calibrated: whether their average confidence in their opinions matches the proportion of those opinions that are true. Under certain conditions, people are systematically ‘over-calibrated’—for example, of the opinions they’re 80% confident in, only 60% are true. From this empirical over-calibration, it’s inferred that people are irrationally overconfident. My question: When and why is this inference warranted? Answering it requires articulating a general connection between being rational and being right—something extant studies have not done. I show how to do so using the notion of deference. This provides a theoretical foundation for calibration research, but also reveals a flaw: the connection between being rational and being right is much weaker than is standardly assumed, since rational people can often be expected to be miscalibrated. Thus we can’t test whether people are overconfident by simply testing whether they are over-calibrated; instead, we must try to predict the rational deviations from calibration, and then compare those predictions to people’s performance. I show how this can be done, and that doing so complicates the interpretation of robust empirical effects.
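
The calibration test described here is easy to state concretely: group opinions by the confidence assigned to them and compare that confidence to the share that turn out true. A small sketch with made-up data:

```python
# Hypothetical (confidence, was_true) pairs; not data from any actual study.
opinions = [(0.8, True), (0.8, False), (0.8, True), (0.8, False), (0.8, True),
            (0.6, True), (0.6, True), (0.6, False)]

def hit_rate(opinions, level):
    """Proportion true among the opinions held with the given confidence."""
    truths = [truth for conf, truth in opinions if conf == level]
    return sum(truths) / len(truths)

# 'Over-calibration' at 0.8: stated confidence exceeds the hit rate (0.8 vs 0.6).
print(hit_rate(opinions, 0.8))
```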

Splitting the (In)Difference: Why Fine-Tuning Supports Design (with Chris Dorst). 2022. Thought 11 (1): 14-23.

  
Given the laws of our universe, the initial conditions and cosmological constants had to be "fine-tuned" to result in life. Is this evidence for design? We argue that we should be uncertain whether an ideal agent would take it to be so—but that given such uncertainty, we should react to fine-tuning by boosting our confidence in design. The degree to which we should do so depends on our credences in controversial metaphysical issues.

(Almost) All Evidence is Higher-Order Evidence (with Brian Hedden). 2022. Analysis 82 (3): 417-425.

  
Higher-order evidence is evidence about whether you’ve rationally responded to your evidence. Many have argued that it’s special—falling into its own evidential category, or leading to deviations from standard rational norms. But it’s not. Given standard assumptions, almost all evidence is higher-order evidence.

Assertion is Weak (with Matt Mandelkern). 2022. Philosophers' Imprint 22 (19): 1-20.

  
Recent work has argued that belief is weak: the level of rational credence required for belief is relatively low. That literature has contrasted belief with assertion, arguing that the latter requires an epistemic state much stronger than (weak) belief—perhaps knowledge or even certainty. We argue that this is wrong: assertion is just as weak as belief. We first present a variety of new arguments for this claim, and then show that the standard arguments for stronger norms are not convincing. Finally, we sketch an alternative picture on which the fundamental norm of assertion is to say what you believe, but both belief and assertion are weak. To help make sense of this, we propose that both belief and assertion involve navigating a tradeoff between accuracy and informativity: it can make sense to believe or say something you only have weak evidence for, so long as it’s sufficiently informative.

Good Guesses (with Matt Mandelkern). 2022. Philosophy and Phenomenological Research 105 (3): 581-618

  
This paper is about guessing: how people respond to a question when they aren’t certain of the answer. Guesses show surprising and systematic patterns that the most obvious theories don’t explain. We argue that these patterns reveal that people aim to optimize a tradeoff between accuracy and informativity when forming their guess. After spelling out our theory, we use it to argue that guessing plays a central role in our cognitive lives. In particular, our account of guessing yields new theories of belief, assertion, and the conjunction fallacy—the psychological finding that people sometimes rank a conjunction as more probable than one of its conjuncts. More generally, we suggest that guessing helps explain how boundedly rational agents like us navigate a complex, uncertain world.
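
One toy way to make such a tradeoff concrete (an illustrative scoring rule and made-up numbers of my own, not necessarily the paper's account): score a candidate guess by its probability of being true, discounted as the guess becomes less informative.

```python
from itertools import combinations

# Hypothetical probabilities over complete answers to "Where is Latif?"
answers = {"Paris": 0.4, "Berlin": 0.3, "Rome": 0.2, "Madrid": 0.1}

def score(guess, lam=0.6):
    """Probability of being right, discounted by a factor lam for each extra
    answer the guess leaves open (an assumed tradeoff, for illustration)."""
    return sum(answers[a] for a in guess) * lam ** (len(guess) - 1)

candidates = [g for r in range(1, len(answers) + 1)
              for g in combinations(answers, r)]
print(max(candidates, key=score))   # with lam = 0.6: ('Paris', 'Berlin')
```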

Be modest: you're living on the edge. 2022. Analysis. 81 (4): 611–621.

  
Many have claimed that whenever an investigation might provide evidence for a claim, it might also provide evidence against it. Similarly, many have claimed that your credence should never be on the edge of the range of credences that you think might be rational. Surprisingly, both of these principles imply that you cannot rationally be modest: you cannot be uncertain what the rational opinions are.

Deference Done Better (with Ben Levinstein, Bernhard Salow, Brooke E. Husic, and Branden Fitelson). 2021. Philosophical Perspectives. 35 (1): 99–150. [Mathematica notebook]

  
There are many things—call them ‘experts’—that you should defer to in forming your opinions. The trouble is, many experts are modest: they’re less than certain that they are worthy of deference. When this happens, the standard theories of deference break down: the most popular (“Reflection”-style) principles collapse to inconsistency, while their most popular (“New-Reflection”-style) variants allow you to defer to someone while regarding them as an anti-expert. We propose a middle way: deferring to someone involves preferring to make any decision using their opinions instead of your own. In a slogan, deferring opinions is deferring decisions. Generalizing the proposal of Dorst (2020a), we first formulate a new principle that shows exactly how your opinions must relate to an expert’s for this to be so. We then build off the results of Levinstein (2019) and Campbell-Moore (2020) to show that this principle is also equivalent to the constraint that you must always expect the expert’s estimates to be more accurate than your own. Finally, we characterize the conditions an expert’s opinions must meet to be worthy of deference in this sense, showing how they sit naturally between the too-strong constraints of Reflection and the too-weak constraints of New Reflection.
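
A small numerical sketch of the accuracy condition mentioned above (hypothetical numbers of my own): given your joint credences over whether it rains and what the expert's credence is, you can check whether you expect the expert's estimate to score better than your own.

```python
# Each case: (expert_credence_in_rain, it_rains, your_probability_of_this_case)
cases = [(0.9, True, 0.3), (0.9, False, 0.1),
         (0.2, True, 0.1), (0.2, False, 0.5)]

my_credence = sum(p for _, rains, p in cases if rains)   # 0.4

def expected_brier(estimate):
    """Expected squared error of an estimate function, by your own lights."""
    return sum(p * (estimate(expert) - rains) ** 2 for expert, rains, p in cases)

print(expected_brier(lambda expert: my_credence))   # using your own credence:    0.240
print(expected_brier(lambda expert: expert))        # deferring to the expert:    0.168 (lower is better)
```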

Evidence: A Guide for the Uncertain. 2020. Philosophy and Phenomenological Research. 100 (3): 586–632.

  
Assume that it is your evidence that determines what opinions you should have. I argue that since you should take peer disagreement seriously, evidence must have two features. (1) It must sometimes warrant being modest: uncertain what your evidence warrants, and (thus) uncertain whether you’re rational. (2) But it must always warrant being guided: disposed to treat your evidence as a guide. It is surprisingly difficult to vindicate these dual constraints. But diagnosing why this is so leads to a proposal—Trust—that is weak enough to allow modesty but strong enough to yield many guiding features. In fact, I claim that Trust is the Goldilocks principle—for it is necessary and sufficient to vindicate the claim that you should always prefer to use free evidence. Upshot: Trust lays the foundations for a theory of disagreement and, more generally, an epistemology that permits self-doubt—a modest epistemology.

Abominable KK Failures. 2019. Mind. 128 (512): 1227–1259

  
KK is the thesis that if you can know p, you can know that you can know p. Though it’s unpopular, a flurry of considerations have recently emerged in its favor. Here we add fuel to the fire: standard resources allow us to show that any failure of KK will lead to the knowability and assertability of abominable indicative conditionals of the form, ‘If I don’t know it, p.’ Such conditionals are manifestly not assertable—a fact that KK defenders can easily explain. I survey a variety of KK-denying responses and find them wanting. Those who object to the knowability of such conditionals must either (i) deny the possibility of harmony between knowledge and belief, or (ii) deny well-supported connections between conditional and unconditional attitudes. Meanwhile, those who grant knowability owe us an explanation of such conditionals’ unassertability—yet no successful explanations are on offer. Upshot: we have new evidence for KK.

Lockeans Maximize Expected Accuracy. 2019. Mind. 128 (509): 175–211

  
The Lockean Thesis says that you must believe p iff you’re sufficiently confident of it. On some versions, the ‘must’ asserts a metaphysical connection; on others, it asserts a normative one. On some versions, ‘sufficiently confident’ refers to a fixed threshold of credence; on others, it varies with proposition and context. Claim: the Lockean Thesis follows from epistemic utility theory—the view that rational requirements are constrained by the norm to promote accuracy. Different versions of this theory generate different versions of Lockeanism; moreover, a plausible version of epistemic utility theory meshes with natural language considerations, yielding a new Lockean picture that helps to model and explain the role of beliefs in inquiry and conversation. Your beliefs are your best guesses in response to the epistemic priorities of your context. Upshot: we have a new approach to the epistemology and semantics of belief. And it has teeth. It implies that the role of beliefs is fundamentally different from what many have thought, and in fact supports a metaphysical reduction of belief to credence.
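
A worked instance of the basic idea (a standard textbook-style toy with assumed accuracy values, not the paper's full framework): if believing p is worth R when true and costs W when false, expected accuracy favors belief exactly when your credence clears a threshold fixed by R and W.

```python
R, W = 1.0, 3.0   # assumed accuracy values: payoff of a true belief, cost of a false one

def should_believe(credence):
    """Believing beats suspending in expected accuracy iff this quantity is positive."""
    return credence * R - (1 - credence) * W > 0

threshold = W / (R + W)   # the induced Lockean threshold; here 0.75
print(threshold, should_believe(0.8), should_believe(0.7))   # 0.75 True False
```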

Higher-Order Uncertainty. 2019. In M. Skipper & A. Steglich-Petersen (eds.), Higher-Order Evidence: New Essays. Oxford University Press, 35–61.

  
You have higher-order uncertainty iff you are uncertain of what opinions you should have. I defend three claims about it. First, the higher-order evidence debate can be helpfully reframed in terms of higher-order uncertainty. The central question becomes how your first- and higher-order opinions should relate—a precise question that can be embedded within a general, tractable framework. Second, this question is nontrivial. Rational higher-order uncertainty is pervasive, and lies at the foundations of the epistemology of disagreement. Third, the answer is not obvious. The Enkratic Intuition—that your first-order opinions must “line up” with your higher-order opinions—is incorrect; epistemic akrasia can be rational. If all this is right, then it leaves us without answers—but with a clear picture of the question, and a fruitful strategy for pursuing it.

Can the Knowledge Norm Co-Opt the Opt-Out? 2014. Thought: A Journal of Philosophy 3 (4): 273–282.

  
The Knowledge Norm of Assertion (KNA) claims that it is proper to assert that p only if one knows that p. Though supported by a wide range of evidence, it appears to generate incorrect verdicts when applied to utterances of “I don’t know.” Instead of being an objection to KNA, I argue that this linguistic data shows that “I don’t know” does not standardly function as a literal assertion about one’s epistemic status; rather, it is an indirect speech act that has the primary illocutionary force of opting out of the speaker’s conversational responsibilities. This explanation both reveals that the opt-out is an under-appreciated type of illocutionary act with a wide range of applications, and shows that the initial data in fact supports KNA over its rivals.

Handbook Articles and Reviews

Higher-Order Evidence. Forthcoming. In Maria Lasonen-Aarnio and Clayton Littlejohn (eds.), The Routledge Handbook for the Philosophy of Evidence. Routledge.

  
On at least one of its uses, ‘higher-order evidence’ refers to evidence about what opinions are rationalized by your evidence. This chapter surveys the foundational epistemological questions raised by such evidence, the methods that have proven useful for answering them, and the potential consequences and applications of such answers.

Review of Epistemic Consequentialism, by Kristoffer Ahlstrom-Vij and Jeffrey Dunn (eds.). 2020. Philosophical Review 129 (3): 484-489.

Dissertation: Modest Epistemology

Thinking properly is hard. Sometimes I mess it up. I definitely messed it up yesterday. I’ll likely mess it up tomorrow. Maybe I’m messing it up right now.

I’m guessing you’re like me. If so, then we’re both modest: we’re unsure whether we’re thinking rationally. And, it seems, we should be: given our knowledge of our own limitations, it’s rational for us to be unsure whether we’re thinking rationally. How, then, should we think? How does uncertainty about what it’s rational to think affect what it’s rational to think? And how do our judgments of people’s (ir)rationality change once we realize that it can be rational to be modest? My dissertation makes a start on answering those questions.