KEVIN DORST

Work in Progress                                                                         

Rational Polarization (Handout; Presentation)

  
Predictable polarization is everywhere. We can often predict the different directions that people’s opinions—including our own—will shift over time. Empirical studies suggest that this is so whenever evidence is ambiguous—a result that is often thought to demonstrate human bias or irrationality. It doesn’t. Bayesians will predictably polarize iff their evidence is ambiguous. And indeed, ours often is: the process of cognitive search—searching a cognitively-accessible space for an item of a particular profile—yields ambiguous evidence that can predictably polarize beliefs, despite being expected to make them more accurate. In principle, a series of such rational updates could lead to polarization that is predictable, profound, and persistent. Thus it’s theoretically possible that rational mechanisms drive predictable polarization. It’s also empirically plausible. I present a novel experiment confirming the polarizing effect of cognitive search, and then use models and simulations to show how such ambiguous evidence can help explain two of the core causes of polarization: confirmation bias and the group polarization effect.
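
To see the baseline fact this turns on, here is a minimal sketch (illustrative Python with made-up numbers, not the paper's model): when evidence is unambiguous and the agent simply conditions on what she learns, her expected posterior equals her prior, so there is no predictable direction for her beliefs to move; predictable polarization requires something more, such as ambiguity.

    # Minimal sketch (not the paper's model): with unambiguous evidence, a Bayesian's
    # expected posterior equals her prior, so belief shifts have no predictable direction.
    prior = 0.5             # prior credence in hypothesis H
    p_e_given_h = 0.8       # P(evidence | H)      -- illustrative number
    p_e_given_not_h = 0.3   # P(evidence | not-H)  -- illustrative number

    # How likely the evidence is, averaging over H and not-H
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h

    # Posteriors after observing the evidence or its absence (Bayes' theorem)
    post_if_e = prior * p_e_given_h / p_e
    post_if_not_e = prior * (1 - p_e_given_h) / (1 - p_e)

    # Expected posterior, weighted by how likely each observation is
    expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
    print(round(post_if_e, 3), round(post_if_not_e, 3), round(expected_posterior, 3))
    # -> 0.727 0.222 0.5 : the expectation equals the prior exactly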

Being Rational and Being Wrong

  
Do people tend to be overconfident in their opinions? Many think so. They’ve run studies to test whether people are calibrated: whether their confidence in their opinions matches the proportion of those opinions that are true. Under certain conditions, people are systematically “over-calibrated”—for example, of the opinions they’re 80% confident in, only 60% are true. From this observed over-calibration, it’s inferred that people are irrationally overconfident. My question: When—and why—is this inference warranted? Answering this question requires articulating a general connection between being rational and being right—something extant studies have not done. I show how to do so using the notion of deference. This provides a theoretical foundation to calibration research, but also reveals a flaw: the connection between being rational and being right is much weaker than is commonly assumed; as a result, rational people can often be expected to be miscalibrated. Thus we can’t test whether people are overconfident by simply testing whether they are over-calibrated; instead, we must first predict the expected rational deviations from calibration, and then compare those predictions to people’s performance. I show how in principle this can be done—and that doing so has the potential to overturn the standard interpretation of robust empirical effects. In short: rational people can be expected to be wrong more often than you might think.
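
For concreteness, here is how the calibration measurement described above is computed; the data below are hypothetical, chosen only to reproduce the 80%/60% example.

    # Sketch of a calibration check on hypothetical data (not from any cited study):
    # group opinions by stated confidence and compare to the proportion that are true.
    from collections import defaultdict

    # (stated confidence, whether the opinion turned out true)
    opinions = [(0.8, True), (0.8, False), (0.8, True), (0.8, False), (0.8, True),
                (0.6, True), (0.6, True), (0.6, False), (0.6, False), (0.6, False)]

    bins = defaultdict(list)
    for confidence, true in opinions:
        bins[confidence].append(true)

    for confidence, outcomes in sorted(bins.items()):
        hit_rate = sum(outcomes) / len(outcomes)
        # Over-calibration: stated confidence exceeds the observed hit rate
        print(f"confidence {confidence:.0%}: {hit_rate:.0%} true")
    # -> confidence 60%: 40% true
    # -> confidence 80%: 60% true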

Fine-Tuning Divine Indifference (with Chris Dorst)

  
Given the laws of our universe, the initial conditions and cosmological constants had to be “fine-tuned” to result in life. Is this evidence for design? We argue that we should be uncertain whether an ideal agent would take it to be so, but that given such uncertainty, we should react to fine-tuning by boosting our confidence in design. The degree to which we should do so depends on our credences in controversial metaphysical issues.
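
The Bayesian structure behind "boosting our confidence in design" can be put in the standard odds form of Bayes' theorem (a textbook rendering, not the paper's own formalism): where D is the design hypothesis and E is the fine-tuning evidence,

    \[ \frac{P(D \mid E)}{P(\neg D \mid E)} \;=\; \frac{P(D)}{P(\neg D)} \cdot \frac{P(E \mid D)}{P(E \mid \neg D)}, \]

so E raises confidence in D exactly when the likelihood ratio P(E|D)/P(E|¬D) exceeds 1, and the size of the boost depends on how large that ratio is judged to be.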

Academic Publications

Deference Done Better (with Ben Levinstein, Bernhard Salow, Brooke E. Husic, and Branden Fitelson). Forthcoming. Philosophical Perspectives. [Mathematica notebook]

  
There are many things—call them ‘experts’—that you should defer to in forming your opinions. The trouble is, many experts are modest: they’re less than certain that they are worthy of deference. When this happens, the standard theories of deference break down: the most popular (“Reflection”-style) principles collapse to inconsistency, while their most popular (“New-Reflection”-style) variants allow you to defer to someone while regarding them as an anti-expert. We propose a middle way: deferring to someone involves preferring to make any decision using their opinions instead of your own. In a slogan, deferring opinions is deferring decisions. Generalizing the proposal of Dorst (2020a), we first formulate a new principle that shows exactly how your opinions must relate to an expert’s for this to be so. We then build off the results of Levinstein (2019) and Campbell-Moore (2020) to show that this principle is also equivalent to the constraint that you must always expect the expert’s estimates to be more accurate than your own. Finally, we characterize the conditions an expert’s opinions must meet to be worthy of deference in this sense, showing how they sit naturally between the too-strong constraints of Reflection and the too-weak constraints of New Reflection.
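
For reference, the "Reflection-style" principles at issue are standardly formulated as follows (this is the textbook statement, not necessarily the paper's notation): where E is the expert's credence function,

    \[ P(A \mid E(A) = x) = x \quad \text{whenever defined}. \]

The abstract's claim is that principles of this form collapse to inconsistency once the expert is modest, i.e. less than certain that her own opinions are worthy of deference.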

(Almost) All Evidence is Higher-Order Evidence (with Brian Hedden). Forthcoming. Analysis.

  
Higher-order evidence is evidence about whether you’ve rationally responded to your evidence. Many have argued that it’s special—falling into its own evidential category, or leading to deviations from standard rational norms. But it’s not. Given standard assumptions, almost all evidence is higher-order evidence.

Assertion is Weak (with Matt Mandelkern). Forthcoming. Philosophers' Imprint.

  
Recent work has argued that belief is weak: the level of rational credence required for belief is relatively low. That literature has contrasted belief with assertion, arguing that the latter requires an epistemic state much stronger than (weak) belief—perhaps knowledge or even certainty. We argue that this is wrong: assertion is just as weak as belief. We first present a variety of new arguments for this claim, and then show that the standard arguments for stronger norms are not convincing. Finally, we sketch an alternative picture on which the fundamental norm of assertion is to say what you believe, but both belief and assertion are weak. To help make sense of this, we propose that both belief and assertion involve navigating a tradeoff between accuracy and informativity: it can make sense to believe or say something you only have weak evidence for, so long as it’s sufficiently informative.

Good Guesses (with Matt Mandelkern). Forthcoming. Philosophy and Phenomenological Research.

  
This paper is about guessing: how people respond to a question when they aren’t certain of the answer. Guesses show surprising and systematic patterns that the most obvious theories don’t explain. We argue that these patterns reveal that people aim to optimize a tradeoff between accuracy and informativity when forming their guess. After spelling out our theory, we use it to argue that guessing plays a central role in our cognitive lives. In particular, our account of guessing yields new theories of belief, assertion, and the conjunction fallacy—the psychological finding that people sometimes rank a conjunction as more probable than one of its conjuncts. More generally, we suggest that guessing helps explain how boundedly rational agents like us navigate a complex, uncertain world.
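
For readers unfamiliar with the conjunction fallacy: the ranking is fallacious because, for any probability function, a conjunction is never more probable than either of its conjuncts,

    \[ P(A \wedge B) \le P(A), \]

since every possibility in which A-and-B holds is one in which A holds.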

Be modest: you're living on the edge. Forthcoming. Analysis.

  
Many have claimed that whenever an investigation might provide evidence for a claim, it might also provide evidence against it. Similarly, many have claimed that your credence should never be on the edge of the range of credences that you think might be rational. Surprisingly, both of these principles imply that you cannot rationally be modest: you cannot be uncertain what the rational opinions are.

Evidence: A Guide for the Uncertain. 2020. Philosophy and Phenomenological Research. 100 (3): 586–632.

  
Assume that it is your evidence that determines what opinions you should have. I argue that since you should take peer disagreement seriously, evidence must have two features. (1) It must sometimes warrant being modest: uncertain what your evidence warrants, and (thus) uncertain whether you’re rational. (2) But it must always warrant being guided: disposed to treat your evidence as a guide. It is surprisingly difficult to vindicate these dual constraints. But diagnosing why this is so leads to a proposal—Trust—that is weak enough to allow modesty but strong enough to yield many guiding features. In fact, I claim that Trust is the Goldilocks principle—for it is necessary and sufficient to vindicate the claim that you should always prefer to use free evidence. Upshot: Trust lays the foundations for a theory of disagreement and, more generally, an epistemology that permits self-doubt—a modest epistemology.
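
In rough form (the paper's official statement is more careful, and this gloss may not match its exact notation), Trust-style principles say that, conditional on the rational credence in q being at least t, your credence in q should be at least t:

    \[ P\big(q \mid \text{the rational credence in } q \text{ is at least } t\big) \;\ge\; t. \]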

Abominable KK Failures. 2019. Mind. 128 (512): 1227–1259.

  
KK is the thesis that if you can know p, you can know that you can know p. Though it’s unpopular, a flurry of considerations have recently emerged in its favor. Here we add fuel to the fire: standard resources allow us to show that any failure of KK will lead to the knowability and assertability of abominable indicative conditionals of the form, ‘If I don’t know it, p.’ Such conditionals are manifestly not assertable—a fact that KK defenders can easily explain. I survey a variety of KK-denying responses and find them wanting. Those who object to the knowability of such conditionals must either (i) deny the possibility of harmony between knowledge and belief, or (ii) deny well-supported connections between conditional and unconditional attitudes. Meanwhile, those who grant knowability owe us an explanation of such conditionals’ unassertability—yet no successful explanations are on offer. Upshot: we have new evidence for KK.

Lockeans Maximize Expected Accuracy. 2019. Mind. 128 (509): 175–211.

  
The Lockean Thesis says that you must believe p iff you’re sufficiently confident of it. On some versions, the ‘must’ asserts a metaphysical connection; on others, it asserts a normative one. On some versions, ‘sufficiently confident’ refers to a fixed threshold of credence; on others, it varies with proposition and context. Claim: the Lockean Thesis follows from epistemic utility theory—the view that rational requirements are constrained by the norm to promote accuracy. Different versions of this theory generate different versions of Lockeanism; moreover, a plausible version of epistemic utility theory meshes with natural language considerations, yielding a new Lockean picture that helps to model and explain the role of beliefs in inquiry and conversation. Your beliefs are your best guesses in response to the epistemic priorities of your context. Upshot: we have a new approach to the epistemology and semantics of belief. And it has teeth. It implies that the role of beliefs is fundamentally different from what many have thought, and in fact supports a metaphysical reduction of belief to credence.
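
To illustrate the kind of connection at issue with a toy accuracy measure (not the paper's official one): suppose a true belief in p is worth 1 and a false belief in p costs c > 0. Then believing p has higher expected accuracy than suspending exactly when your credence q clears a Lockean threshold:

    \[ q \cdot 1 + (1 - q)(-c) > 0 \iff q > \frac{c}{1+c}. \]

For instance, with c = 3 the threshold is 3/4; raising the cost of error raises the confidence required for belief.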

Higher-Order Uncertainty. 2019. In M. Skipper & A. Steglich-Petersen (eds.), Higher-Order Evidence: New Essays. Oxford University Press, 35–61.

  
You have higher-order uncertainty iff you are uncertain of what opinions you should have. I defend three claims about it. First, the higher-order evidence debate can be helpfully reframed in terms of higher-order uncertainty. The central question becomes how your first- and higher-order opinions should relate—a precise question that can be embedded within a general, tractable framework. Second, this question is nontrivial. Rational higher-order uncertainty is pervasive, and lies at the foundations of the epistemology of disagreement. Third, the answer is not obvious. The Enkratic Intuition, that your first-order opinions must “line up” with your higher-order opinions, is incorrect; epistemic akrasia can be rational. If all this is right, then it leaves us without answers, but with a clear picture of the question and a fruitful strategy for pursuing it.

Can the Knowledge Norm Co-Opt the Opt-Out? 2014. Thought: A Journal of Philosophy 3 (4): 273–282.

  
The Knowledge Norm of Assertion (KNA) claims that it is proper to assert that p only if one knows that p. Though supported by a wide range of evidence, it appears to generate incorrect verdicts when applied to utterances of “I don’t know.” Instead of being an objection to KNA, I argue that this linguistic data shows that “I don’t know” does not standardly function as a literal assertion about one’s epistemic status; rather, it is an indirect speech act that has the primary illocutionary force of opting out of the speaker’s conversational responsibilities. This explanation both reveals that the opt-out is an under-appreciated type of illocutionary act with a wide range of applications, and shows that the initial data in fact supports KNA over its rivals.

Handbook Articles and Reviews

Higher-Order Evidence. Forthcoming. In Maria Lasonen-Aarnio and Clayton Littlejohn (eds.), The Routledge Handbook for the Philosophy of Evidence. Routledge.

  
On at least one of its uses, ‘higher-order evidence’ refers to evidence about what opinions are rationalized by your evidence. This chapter surveys the foundational epistemological questions raised by such evidence, the methods that have proven useful for answering them, and the potential consequences and applications of such answers.

Review of Epistemic Consequentialism, by Kristoffer Ahlstrom-Vij and Jeffrey Dunn (eds.). 2020. Philosophical Review 129 (3): 484–489.

Public Philosophy

Stranger Apologies blog, including the series Reasonably Polarized: Why politics is more rational than you think.

  
The blog explores the extent to which empirical findings from psychology and behavioral economics support an irrationalist picture of human nature. The Reasonably Polarized series focuses on political polarization, arguing that such polarization is due in large part to rational causes.

How America Polarized. Arc Digital. February 3, 2021.

  
A short explainer on the trajectory of political polarization in the US over the last half-century.

The Other Side is More Rational Than You Think. Arc Digital. September 25, 2020.

  
I argue that since (1) we can't think our own political beliefs are irrational, and (2) the psychological evidence shows that partisans on both sides form their beliefs through the same mechanisms, we should conclude that both sides are rational in holding their beliefs. We should still think the other side is wrong, of course; so we should conclude that they are wrong because they've been misled.

The Rational Question. The Oxonian Review. March 14, 2020.

  
I argue that a general problem with claimed demonstrations of irrationality is their reliance on standard economic models of rational belief and action.

Why Rational People Polarize. Phenomenal World. January 24, 2019.

  
I argue that several of the psychological tendencies that drive polarization can arise from purely rational mechanisms, due to the fact that some types of evidence are predictably more ambiguous than others.
Podcasts/Interviews:
  • DIE ZEIT: "We must not consider every political opponent to be irrational." March 27, 2022.
  • Onkegend: "Why confirmation bias and polarization can be rational." March 9, 2022.
  • Parlia: "Why Bias is Rational." September 30, 2020.
  • Embrace the Void: "The Tragedy of the Epistemic Commons." September 4, 2020.

Dissertation: Modest Epistemology

Thinking properly is hard. Sometimes I mess it up. I definitely messed it up yesterday. I’ll likely mess it up tomorrow. Maybe I’m messing it up right now.

I’m guessing you’re like me. If so, then we’re both modest: we’re unsure whether we’re thinking rationally. And, it seems, we should be: given our knowledge of our own limitations, it’s rational for us to be unsure whether we’re thinking rationally. How, then, should we think? How does uncertainty about what it’s rational to think affect what it’s rational to think? And how do our judgments of people’s (ir)rationality change once we realize that it can be rational to be modest? My dissertation makes a start on answering those questions.

Chapter 1 ("Higher-Order Uncertainty") introduces a general framework for modeling situations in which you are rational to be unsure what the rational opinions are. I first show how this framework allows us to precisely formulate the questions from the higher-order evidence literature. I then use it to argue that rational modesty is needed to explain the epistemic force of peer disagreement, and therefore that any theory of such disagreement must be based on a general theory of rational modesty. Many have suggested that such a theory can be easily formulated based on the enkratic intuition that your first-order opinions must “line up” with your higher-order ones. But I argue that this is incorrect: whenever modesty is rational, so too is epistemic akrasia. We need to look elsewhere for a general theory of rational modesty.

Chapter 2 ("Evidence: A Guide for the Uncertain") offers one. I build a theory that—in a precise sense—allows as much modesty as possible while still guaranteeing that rationality is a guide. The key principle—which I call Trust—formalizes the truism that it’s likely that what the evidence supports is true. I show that Trust permits modesty, ensures that rational opinions are correlated with truth, and is necessary and (in a wide class of scenarios) sufficient to vindicate the truism that you should always prefer to respond rationally to your evidence. In sum, Trust establishes that there is a principled way for rational people to be modest.

Chapter 3 ("Rational Polarization") applies this theory of rational modesty to the psychology of human reasoning. In particular, a wide range of studies suggest that people have a tendency to predictably polarize in the face of conflicting evidence: to gather and interpret evidence in a way that leads them to predictably strengthen their prior beliefs. This “confirmation bias” is standardly taken to be a hallmark of human irrationality. It need not be. I first prove that whenever modesty can be rational, so too can predictable polarization. I then argue, further, that this abstract possibility may play a role in the actual polarization we observe. In particular, given common structures of rational modesty generated by the process of cognitive search, rational agents who care only about the truth should sometimes exhibit confirmation bias.