(This is a guest post by Brian Hedden. 2400 words; 10 minute read.)

It’s now part of conventional wisdom that people are irrational in systematic and predictable ways. Research purporting to demonstrate this has resulted in at least two Nobel Prizes and a number of best-selling books. It’s also revolutionized economics and the law, with potentially significant implications for public policy.

Recently, some scholars have begun pushing back against this dominant irrationalist narrative. Much of the pushback has come from philosophers, and it has come by way of questioning the normative models of rationality assumed by the irrationalist economists and psychologists. Tom Kelly has argued that what looks like committing the sunk cost fallacy may sometimes reflect perfectly rational preferences concerning the narrative arc of one’s life and projects. Jacob Nebel has argued that status quo bias can sometimes amount to a perfectly justifiable conservatism about value. And Kevin Dorst has argued that polarization and the overconfidence effect might be perfectly rational responses to ambiguous evidence.

In this post, I’ll explain my own work pushing back against the conclusion that humans are predictably irrational in virtue of displaying so-called hindsight bias. Hindsight bias is the phenomenon whereby knowing that some event actually occurred leads you to give a higher estimate of the degree to which that event’s occurrence was supported by the evidence available beforehand. I argue that not only is hindsight bias often not irrational; sometimes it’s even rationally required, so that failure to display hindsight bias would be irrational.

Of course, the fact that hindsight bias is compatible with––and sometimes even entailed by––models of ideal rationality doesn’t mean that we humans are rational when we display hindsight bias. We might go too far, or base our hindsight bias on bad reasons. And so I’ll close by considering how we might test whether, when actual humans display hindsight bias (or other alleged biases, for that matter), we are doing so in the rational way I identify or are instead doing so in some other, irrational way.
Hindsight Bias

Consider a medical case. The doctor had evidence consisting of X-rays and blood samples, and on that basis had to reach a conclusion about how likely it was that the patient had a tumour. Hindsight bias means that you’ll estimate that very evidence, the evidence that was available ex ante, as more strongly supporting the existence of a tumour if you know that the tumour’s existence was later confirmed than if you don’t.

Or consider the case of an accident. The railroad corporation had evidence concerning the safety of its tracks, including a variety of models and records of track inspections. Hindsight bias means that you’ll estimate that evidence as supporting a higher probability of train derailment if you know that a train in fact derailed than if you don’t.

It’s tempting to think that the judgments you give when you have the benefit of hindsight are irrational and tend to overestimate the degree to which the ex ante evidence supported the relevant hypothesis – that the patient had a tumour, or that a train would derail. The idea is that it’s somehow unfair to take into account evidence that wasn’t available ex ante (namely, evidence that the hypothesis is in fact true) and have it influence your judgment about the upshot of the ex ante evidence. Hindsight is 20/20, but it’s irrational to thereby think that foresight should have been 20/20 too. After all, evidence can be misleading or ambiguous, so why not just think that that’s what happened here? That is, why not think that the ex ante evidence didn’t strongly support the diagnosis of a tumour or the prediction of a train derailment, even though that diagnosis and prediction would have in fact been correct?

As these cases suggest, hindsight bias is of particular importance in the context of negligence lawsuits. In such cases, factfinders need to determine whether the defendants (the doctor or the railroad corporation) took reasonable steps in light of the evidence they had available. If hindsight bias leads those factfinders to overestimate the degree to which the ex ante evidence suggested a need for action (action which wasn’t taken by the defendants), then it will likewise make them tend to judge the defendants negligent when in fact they weren’t.

Lower-Order Evidence

Above I gave a quick-and-dirty argument for thinking that hindsight bias is irrational. Suppose that, before learning whether H was true, you rated the ex ante evidence as not very strongly supporting H. (In jargon, your expectation of the degree to which that evidence supported H was low.) Then you learn that H is true. Well, evidence can be misleading, and so the fact that H is in fact true still leaves open the possibility that the ex ante evidence didn’t strongly support H. So perhaps you shouldn’t change your view about how strongly the ex ante evidence supported H and should instead just conclude that flukes happen.

This is a bad argument. We can see that it’s a bad argument by considering an analogous case involving coin flips (epistemologists love coin flips!). Suppose that, before tossing a mystery coin of unknown bias, you think it’s probably biased towards tails, though you also think there’s some chance that it’s biased towards heads. (In jargon, your expectation of the coin’s bias towards heads is below 0.5.) Then you toss it and see it land heads. Should you change your expectation of the coin’s bias towards heads?
Reasoning analogous to the above would say ‘no.’ After all, even if the coin is biased towards tails, that still leaves open the possibility that the coin would land heads; so instead of raising your expectation of the coin’s bias towards heads, you should stick to your original view and just conclude that this toss was a fluke. But that’s clearly wrong! Any Bayesian will tell you that upon seeing the coin land heads, you should increase your expectation of the coin’s bias towards heads.

Here’s why. The conditional probability of heads given that the coin is biased towards heads is higher than the unconditional probability of heads (that is, the probability of heads given your antecedent views about the coin): P(H | Heads-Biased) > P(H). And positive probabilistic relevance is symmetric, meaning that from the above inequality it follows that P(Heads-Biased | H) > P(Heads-Biased). Similarly, the conditional probability of heads given that the coin is biased towards tails is lower than the unconditional probability of heads: P(H | Tails-Biased) < P(H). It follows by symmetry that P(Tails-Biased | H) < P(Tails-Biased). So upon learning H (i.e. upon seeing the coin land heads), you should be more confident that the coin is biased towards heads and less confident that it is biased towards tails.
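Here’s a minimal numerical sketch of that point in Python; the candidate biases and the prior over them are purely illustrative, chosen only so that the prior expectation of the bias sits below 0.5.

```python
# Hypotheses about the coin's bias towards heads, with a prior that expects
# a tails bias (the particular numbers are illustrative assumptions).
biases = [0.2, 0.4, 0.6, 0.8]
prior = [0.4, 0.3, 0.2, 0.1]

prior_expectation = sum(b * p for b, p in zip(biases, prior))

# Observe one heads. By Bayes' theorem, P(bias | heads) is proportional to
# P(heads | bias) * P(bias), and P(heads | bias) is just the bias itself.
unnormalised = [b * p for b, p in zip(biases, prior)]
posterior = [u / sum(unnormalised) for u in unnormalised]

posterior_expectation = sum(b * p for b, p in zip(biases, posterior))

print(f"Expected bias before seeing heads: {prior_expectation:.2f}")    # 0.40
print(f"Expected bias after seeing heads:  {posterior_expectation:.2f}")  # 0.50
```

Seeing a single heads pushes the expected bias towards heads upward, exactly as the symmetry argument requires.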
The analogy is clear: hypotheses about the bias of the coin are like hypotheses about ex ante evidential support, and seeing how the coin landed is like gaining the benefit of hindsight. Just as seeing the coin land heads should increase your expectation of the coin’s bias towards heads, so learning that some other hypothesis is true should increase your expectation of the degree to which the ex ante evidence supported that hypothesis.

Now, you might think that the coin flip case is importantly different from the case of hindsight bias. Hypotheses about the bias of the coin are contingent and knowable only a posteriori. But hypotheses about evidential support (or, at least, hypotheses about fundamental evidential support, i.e. about how strongly some body of evidence on its own supports some hypothesis) are necessary and knowable a priori. So while you can’t come to know the bias of a coin just by thinking really hard about the coin, you can come to know how strongly some body of evidence supports some hypothesis just by thinking really hard about that evidence. Ideally rational agents might be uncertain about the bias of some coin, but they won’t ever be uncertain about how strongly any body of evidence supports any given hypothesis.

That’s a natural picture, but it’s now widely rejected by epistemologists. Even if the fundamental evidential support facts are necessary, ideally rational agents can and often should be uncertain about them. So when two people examine the same exact evidence and come to different and conflicting views about some hypothesis, neither should be certain about how strongly that evidence supported the hypothesis, even if one of them happened to have initially judged things correctly. And this higher-order uncertainty about the fundamental evidential support facts isn’t inert; it should impact your views about first-order matters. Becoming more confident that your evidence in fact supports H should in general make you more confident in H itself. (Note, however, that it’s a very tricky issue how exactly this should go.) So if you initially judged that the evidence supported low confidence in H, but then you learn that your peer thought it supported high confidence in H, you should yourself increase your confidence that the evidence supported high confidence in H, and you should increase your confidence in H itself on that basis.

On this picture, the case of the coins is in fact analogous to the case of hindsight bias. It’s a picture on which higher-order evidence and higher-order uncertainty (that is, evidence and uncertainty concerning relations of evidential support) have first-order consequences, just as evidence and uncertainty about the bias of a coin have consequences for your views about how it will land. Given that, the symmetry of positive probabilistic relevance means that learning those first-order facts has higher-order consequences for your views about evidential support, just as learning how the coin lands has consequences for your views about its bias.
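To see how the same mechanics generate rational ‘hindsight bias’, here’s a sketch that transposes the coin calculation to uncertainty about evidential support. It leans on a simplifying assumption not argued for here (that each support hypothesis fixes the probability of H, glossing over the tricky issue flagged above), and the support levels and prior are again purely illustrative.

```python
# Competing hypotheses about how strongly the ex ante evidence supported H
# (e.g. a derailment), and your prior uncertainty over which is correct.
# All numbers are illustrative assumptions, not figures from the post.
support_levels = [0.1, 0.3, 0.6]
prior = [0.5, 0.3, 0.2]

expected_support_before = sum(s * p for s, p in zip(support_levels, prior))

# With hindsight, you learn that H in fact occurred. Treating each support
# hypothesis as fixing the probability of H, update by Bayes' theorem.
unnormalised = [s * p for s, p in zip(support_levels, prior)]
posterior = [u / sum(unnormalised) for u in unnormalised]

expected_support_after = sum(s * p for s, p in zip(support_levels, posterior))

print(f"Expected ex ante support for H, without hindsight: {expected_support_before:.2f}")  # 0.26
print(f"Expected ex ante support for H, with hindsight:    {expected_support_after:.2f}")   # 0.40
```

Your estimate of what the ex ante evidence supported rises once you learn that H occurred, which is precisely the pattern that gets labelled ‘hindsight bias’.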
Higher-order evidence, and how to respond to it, is a hot topic in epistemology these days. I’ve shown that recognising the importance of higher-order evidence, and its obverse, lower-order evidence, allows us to see that so-called hindsight bias is often perfectly rational. And the significance of higher-order evidence may help to cast other alleged biases in a new light, sometimes showing that they too can be perfectly rational.

Testing for Irrationality

While I claim that hindsight bias can be perfectly rational, I don’t claim that humans are always (or even usually, or often) perfectly rational when they display hindsight bias. Indeed, my own suspicion is that real-world hindsight bias is still often irrational. Humans may go ‘too far’ with their hindsight bias, using hindsight to change their views about the upshot of the ex ante evidence to a greater extent than is warranted by the above analysis. And even when humans display just the right ‘amount’ of hindsight bias, they may do so for bad reasons. Rather than displaying hindsight bias as a rational response to lower-order evidence, they may do so on the basis of evidentially irrelevant motivational factors like the need for closure or the desire to see themselves as experts.

Much the same goes for other alleged biases that have been defended by philosophers. Tom Kelly, for example, argues that some instances of the sunk cost fallacy may in fact be based on perfectly rational ‘redemptive preferences’ whereby the agent is concerned with the narrative arc of her life, preferring a life filled with projects seen through to fruition rather than a bunch of false starts. But he doesn’t claim that all instances of this sort of behaviour are rational in this way.

Could we run tests to determine the extent to which humans display hindsight bias or sunk cost reasoning irrationally? I am not sure, but I think it would be very difficult and would require rather intricate experimental design, not to mention lots of theoretical work to get the normative models right. There are two reasons for my scepticism.

First, if we assume that it’s always irrational to display some pattern of behaviour to any extent, then things are easy to test. If we find that people ever display that behaviour, then that’s it – end of story. In the case of hindsight bias, if we assume that your judgments about the upshot of the ex ante evidence shouldn’t differ at all depending on whether you know what wound up happening, then any detected difference, no matter how small (provided it’s statistically significant), suffices to show that you’re irrational. There’s a sharp, empirically detectable distinction between ‘no difference’ and ‘some difference.’ But if we assume that your judgments should differ to some degree depending on whether you know what wound up happening, then we can’t determine whether they differ ‘too much’ unless we know exactly how much they should differ. And I think it might well be impossible to say ‘how much’ hindsight bias you should display in any realistic case.

Second, we might try to determine whether humans display hindsight bias irrationally by determining whether they do so on the basis of good reasons or bad. Do they display hindsight bias in virtue of responding to higher- and lower-order evidence, or on the basis of a need for closure and self-esteem? We might try just asking them for their reasons, but we should be very sceptical about people’s capacity to introspectively determine their own motivations. Perhaps some more complex experimental designs could help determine their motivations, or at least ensure that any bias they display couldn’t be based on the good reasons that I identify. There may be no in-principle reason this couldn’t be done, but at the same time it’s certainly no easy task.

What next?

If you want to hear more about the details, check out Brian’s paper, “Hindsight Bias is Not a Bias.” If you want to learn more about the empirical work on hindsight bias, see this seminal paper by Baruch Fischhoff, as well as this recent overview of the literature. For more on the importance of normative models in psychological research, see this paper by Ulrike Hahn and Adam Harris on motivated reasoning.