Reasonably Polarized will be back next week. In the meantime, here's a guest post on the rationality of framing effects, by Sarah Fisher (University of Reading), based on a forthcoming paper of hers that asks whether the "at least" reading of number terms can yield a rational explanation of framing effects. The paper recently won Crítica's essay competition on the theme of empirically informed philosophy—congrats Sarah! 2300 words; 10 minute read. As we learn to live in the ‘new normal’, amidst the easing and tightening of local and national lockdowns, day-to-day decision-making has become fraught with difficulty. Here are some of the questions I’ve been grappling with lately:
Risky-Choice Problems

Imagine you are offered the following choice:

A. Receiving $\$$10 for sure.
B. A 50% chance of receiving $\$$20 and a 50% chance of receiving nothing.

Option (B) involves risk – if you choose it, you can’t be sure of the outcome. Hence, this is an example of a ‘risky-choice task’. Have a think about which option you’d prefer (and write it down if you’re worried about hindsight bias). Now consider the following choice:

C. Losing $\$$10 for sure.
D. A 50% chance of losing $\$$20 and a 50% chance of losing nothing.

Which option would you prefer this time? Typically, people prefer option (A) in the first task and option (D) in the second. Why is this puzzling? Well, first notice that options A and B in the first task have the same ‘expected value’ as each other. We can work out the expected value of gaining $\$$10 with a probability of 1 as follows:

$1 \times \$10 = \$10$

And option B has the same expected value: $0.5 \times \$20 + 0.5 \times \$0 = \$10$.
Why do people tend to prefer option A then? Presumably, because they would rather have a sure thing than risk walking away with nothing. In other words, people are risk-averse. But now let’s look at the second task. Again, options C and D have the same expected value. The expected value of losing $\$$10 with a probability of 1 can be worked out as follows:

$1 \times -\$10 = -\$10$

And option D matches it: $0.5 \times -\$20 + 0.5 \times \$0 = -\$10$.
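These expected-value calculations are just probability-weighted averages of the outcomes. Here's a minimal sketch, assuming the risky options are 50/50 gambles over $20 (as in the classic version of the task described above):

```python
def expected_value(lottery):
    """Expected value of a lottery given as (probability, outcome) pairs."""
    return sum(p * x for p, x in lottery)

# Gain frame: a sure $10 vs. a 50/50 chance of $20 or nothing.
option_a = [(1.0, 10)]
option_b = [(0.5, 20), (0.5, 0)]

# Loss frame: a sure -$10 vs. a 50/50 chance of -$20 or nothing.
option_c = [(1.0, -10)]
option_d = [(0.5, -20), (0.5, 0)]

print(expected_value(option_a), expected_value(option_b))  # 10.0 10.0
print(expected_value(option_c), expected_value(option_d))  # -10.0 -10.0
```

Within each pair the expected values are identical, so a purely expected-value maximiser would be indifferent in both tasks.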
But in the second task, people tend to prefer option D. That indicates risk-seeking behaviour: other things equal, they prefer to take the risk of losing 20 dollars over the certainty of losing 10 dollars. So, it looks like people flip from being risk-averse when they are faced with gains (as in the first task), to being risk-seeking when they are faced with losses (as in the second task). This ‘reflection effect’ was first brought to light by the psychologists Daniel Kahneman and Amos Tversky. They factor it into their ‘prospect theory’ of decision-making under risk, which is designed to model how people really make decisions (not how they should!). You can read their seminal paper on prospect theory here. The ‘shifty’ nature of our risk attitudes is a fascinating topic in its own right. Why do we prefer sure gains but risky losses? For now, I’m going to put that question to one side because I want to focus on a different one: Can options be made to seem like gains or losses just by framing them in particular ways? (1600 words left)

Risky-Choice Framing

The following scenario is adapted from papers by David Mandel, published in 2001 and 2014 (and it is itself inspired by a classic scenario introduced by Tversky and Kahneman here). In a war-torn region, the lives of 600 stranded people are at stake. Two response plans with the following outcomes have been proposed. Assume that the estimates provided are accurate.

Plan A: 200 people will be saved.
Plan B: There is a one-third probability that all 600 people will be saved and a two-thirds probability that no one will be saved.

Now take a look at this slightly different version: In a war-torn region, the lives of 600 stranded people are at stake. Two response plans with the following outcomes have been proposed. Assume that the estimates provided are accurate.

Plan C: 400 people will die.
Plan D: There is a one-third probability that no one will die and a two-thirds probability that all 600 people will die.

In the first task, people tend to prefer the sure option, Plan A. But, in the second task they tend to prefer the risky option, Plan D. Perhaps you did as well. The pattern is reminiscent of our earlier pair of choice tasks. But there’s an important difference: the gains and losses are only apparent now.
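The reflection effect from the first pair of tasks can be made concrete with prospect theory's value function. This is a simplified sketch: it omits probability weighting, and the parameter values are Tversky and Kahneman's later (1992) estimates, used here purely for illustration:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, convex and
    steeper for losses (loss aversion via lam > 1)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def prospect_value(lottery):
    """Probability-weighted sum of subjective values (no weighting function)."""
    return sum(p * value(x) for p, x in lottery)

# Gain frame: the sure $10 beats the 50/50 gamble over $20.
sure_gain = prospect_value([(1.0, 10)])
risky_gain = prospect_value([(0.5, 20), (0.5, 0)])

# Loss frame: the 50/50 gamble over -$20 beats the sure -$10.
sure_loss = prospect_value([(1.0, -10)])
risky_loss = prospect_value([(0.5, -20), (0.5, 0)])

print(sure_gain > risky_gain)   # True: risk-averse for gains
print(risky_loss > sure_loss)   # True: risk-seeking for losses
```

The concavity of the function over gains, and its convexity over losses, is what flips the preference between the two frames even though the expected values match.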
Plan A and Plan C are supposed to have exactly the same outcome, namely 200 people being saved and 400 people dying. Plan B and Plan D are also supposed to be equivalent: both involve a one-third probability of everyone being saved (i.e. nobody dying) and a two-thirds probability of nobody being saved (i.e. everyone dying). So, it’s not as though the first version of the task involves actual gains while the second one involves actual losses. Even if it’s true that we’re risk-averse for gains and risk-seeking for losses, that can’t completely explain the responses here. Cases like this are especially puzzling. Tversky and Kahneman put forward a solution. They point out that in the first version of the choice task, positive language is used: the options are described in terms of the number of people who will ‘be saved’. This, they suggest, makes options A and B sound like gains, even though some people could die. In contrast, the second version of the choice task uses negative language, talking about the number of people who will ‘die’. This makes options C and D sound like losses, even though some people could be saved. Once we’ve interpreted the options as gains or losses, prospect theory predicts what we’ll do next. On the one hand, since we tend to be risk-averse when we think we’re facing gains, we’ll choose Plan A in the first version of the task. On the other hand, since we tend to be risk-seeking when we think we’re facing losses, we’ll choose Plan D in the second version. And, as we saw, that’s precisely the pattern the psychologists find. As an aside, I wonder whether the British and Irish governments had this effect in mind early on in the COVID-19 outbreak. In a tweet from 13th March, Billy Bragg wryly comments on their contrasting communication strategies. Whereas Johnson was using a negative frame, warning that many people would die, Varadkar chose a positive frame, claiming that many could be saved. 
If Tversky and Kahneman are right, Varadkar’s positive framing could have encouraged a relatively cautious response, which would be consistent with Ireland’s swift lockdown. Meanwhile, Johnson could have been encouraging Brits to take a riskier approach (herd immunity…?). This particular case aside, prospect theory has been extremely influential in academia, industry and popular culture. And ‘risky-choice framing effects’ are commonly thought to involve a double dose of irrationality: first, the superficial differences in language affect how we perceive the options facing us (although see my last blog post for a rationalising explanation of the effects of positive and negative frames); second, perceiving options as gains or losses affects which one we prefer. But Tversky and Kahneman’s account isn’t universally accepted. And one way of challenging it is by questioning the equivalence of the options in each version of a risky-choice task. So, in the above example, is 200 people being saved really the same as 400 people dying? Not necessarily… (800 words left)

A Challenge

This study investigates a different explanation of risky-choice framing effects. The author, David Mandel, suggests that many people interpret number terms like ‘200’ and ‘400’ as having ‘at least’ meanings. In fact, this possibility is well-recognised by linguists and philosophers of language. For instance, when we are instructed to keep two metres apart, this is clearly a minimum social distancing measure – even better if it’s three or four metres. So, saying that ‘200 people will be saved’ may be interpreted as meaning that at least 200 people will be saved. That leaves open the (good!) possibility that more than 200 people could be saved under Plan A (i.e. fewer than 400 would die). On the flipside, saying that ‘400 people will die’ may be interpreted as meaning that at least 400 people will die. That leaves open the (bad!) possibility that more than 400 could die (i.e. 
fewer than 200 would be saved). So, strictly speaking, Plan A is a better prospect than Plan C. Perhaps that could be enough to explain why people prefer Plan A in the first version of the problem and Plan D in the second version. Then we needn’t conclude that they have inconsistent attitudes to risk. Mandel conducted a series of experiments. In summary: when the scenarios used bare number terms (‘200 people will be saved’), the usual framing effect appeared; but when the numbers were explicitly marked as exact (‘exactly 200 people will be saved’), the effect disappeared. And participants’ own interpretations suggested that many were indeed reading the bare numbers as lower bounds.
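One way to see why the lower-bound reading breaks the equivalence is to treat each description as the set of outcomes compatible with it. A rough sketch (the set representation is my own illustrative simplification, not Mandel's formalism):

```python
TOTAL = 600  # people at stake

# Exact reading: '200 will be saved' pins down a single outcome.
exact_saved = {200}

# 'At least' readings leave the remainder open:
# 'at least 200 saved' is compatible with 200..600 people saved...
at_least_200_saved = set(range(200, TOTAL + 1))
# ...while 'at least 400 die' is compatible with only 0..200 saved.
at_least_400_die = set(range(0, 201))

# Plan A's worst case (200 saved) is Plan C's *best* case:
print(min(at_least_200_saved))  # 200
print(max(at_least_400_die))    # 200
```

On these readings, every outcome compatible with 'at least 400 die' is at least as bad as the best outcome compatible with 'at least 200 saved', so preferring Plan A over Plan C is no longer inconsistent.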
These are really striking results. It looks like risky-choice framing effects could be all down to how people interpret number terms. So…case closed? Well, not quite. Two attempts to replicate the results of one of Mandel’s experiments failed to eliminate framing effects (see here and here, with Mandel’s reply to the first of these here). In both cases, risky-choice framing effects arose even when number terms like ‘200’ and ‘400’ were being understood exactly. It seems unlikely, then, that risky-choice framing effects are entirely explained by ‘at least’ interpretations of number terms (and this is a point which Mandel himself is careful to make – in separate work, like this paper written with Michael Tombu, he has developed another proposal which could explain the remainder of the effect). Still, I think these ‘lower-bounded’ interpretations of number terms are an important contributing factor (and I argue in this draft paper that Mandel’s critics should think so too). That’s worth noting because it could allow us to rationalise at least some of people’s risky-choice behaviour. How does this relate to communications and decision-making during the global pandemic? One useful takeaway is that numbers can be understood in different ways – as ‘exact’ or ‘at least’ (and sometimes ‘at most’, as when locked-down Brits were allowed to take ‘one’ outdoor excursion a day). So, when we hear politicians and scientists predicting COVID-19 case numbers or death tolls, it’s worth reflecting on whether these are supposed to be point estimates, bare minimums, or upper limits. In practice, the high degree of uncertainty in the current climate often makes it hard to put numbers on outcomes at all. And that may make the ‘lower-bounding hypothesis’ described above less relevant to our ordinary day-to-day risk judgements. 
(However, government decision-makers are far more likely to be presented with quantified outcomes and probabilities, so the findings may be more applicable at that level.) Some important questions we are left with, then, are: Can risky-choice framing effects be entirely rationalised? And, when numbers are communicated, how can we make clear whether they are point estimates, minimums, or maximums?
To sum up, I think the jury is still out on whether risky-choice framing effects can be entirely rationalised. Still, we shouldn’t be too quick to conclude that they can’t. And, in the meantime, we can try to notice when framing is affecting us – perhaps even using that knowledge to our advantage. Recently, I’ve found it particularly useful to reframe my choices, to see how that affects my attitudes and preferences. For instance, staying home may offer predictable monotony compared with the chance of greater enjoyment elsewhere, but it also offers safety over the risk of infection. Perhaps trying out both perspectives can help challenge our own default ways of thinking – and our understanding of each other’s. What next? If you’d like to find out more about the different theories of framing effects, this survey chapter by Karl Halvor Teigen is a good place to start. For more on the philosophical interpretation of the lower-bounding hypothesis, check out Sarah's forthcoming paper on the topic.