KEVIN DORST

Stranger Apologies

Belief Perseverance is Rational

4/3/2021

12 Comments

 
(1,700 words; 10-minute read.  This post was inspired by a suggestion from Chapter 2 (p. 60) of Jess Whittlestone's dissertation on confirmation bias.)

Uh oh.  No family reunion this spring—but still, Uncle Ron managed to get you on the phone for a chat.  It started pleasantly enough, but now the topic has moved to politics.

He says he’s worried that Biden isn’t actually running things as president, and that instead more radical folks are running the show.  He points to some video anomalies (“hand floating over mic”) that led to the theory that many apparent videos of Biden are fake.

You point out that the basis for that theory has been debunked—the relevant anomaly was due to a video compression error, and several other videos taken from the same angle show the same scene.

He reluctantly seems to accept this. The conversation moves on.

But a few days later, you see him posting on social media about how that same video of Biden may have been fake!

Sigh. Didn’t you just correct that mistake? Why do people cling to their beliefs even after they’ve been debunked?  This is the problem; another instance of people’s irrationality leading to our polarized politics—right?

Well, it is a problem. But it may be all the thornier because it’s often rational.

Real-world political cases are messy, so let’s turn to the lab.  The phenomenon is what’s known as belief perseverance: after being given evidence that induces a particular belief, and then later being told that the evidence was actually bunk, people tend to maintain the induced belief to some extent (Ross et al. 1975; McFarland et al. 2007).

A classic example: the debriefing paradigm.  Under the cover story of testing subjects’ powers of social perception, researchers gave them 15 pairs of personal letters and asked them to identify, for each pair, which was written by a real person and which by the experimenters.

Afterwards, they received feedback on how well they did. Unbeknownst to the subjects, that feedback was bogus—they were (at random) told that they got either 14 of 15 notes correct (positive feedback), or 4 of 15 correct (negative feedback). Both groups were told the average person got 9 of 15 correct.

As a result of this feedback, they came to believe that they were above (or below) average on this sort of task.

Some time later, subjects were “debriefed”: told that the feedback they received was bunk and had nothing to do with their actual performance. They were then asked to estimate how well they would do on the task if they were to perform it on a new set of letters.

The belief perseverance effect is that, on average, those who received the positive feedback expected to do better on a new set of letters than those who received the negative feedback—despite the fact that they had both been told that the feedback they received was bunk!

So even after the evidence that induced their belief had been debunked, they still maintained their belief to some degree.

And rationally so, I say.

Imagine that Baya the Bayesian is doing this experiment. She’s considering how well she’d Perform on a future version of the test—trying to predict the value of a variable, P, that ranges from 1 to 15 (the number of notes she’d get correct).

She doesn’t know how she’d perform, but she can form an estimate of this quantity, E[P]: a weighted average of the various values P might take, with weights determined by how likely she thinks they are to obtain.

Suppose she starts out with no idea—she’s equally confident that she’d get any score from 1 to 15:
[Figure: a flat probability distribution, with probability 1/15 on each score from 1 to 15]
Then her estimate for P is just the straight average of these numbers:

E[P]  =  1/15*1 + 1/15*2 + … + 1/15*15  =  8
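If you want to check that arithmetic yourself, here’s a minimal sketch in Python (the estimate helper is just my shorthand for this kind of weighted average):

```python
# Baya's estimate E[P] is a probability-weighted average of the possible scores.
def estimate(probs):
    """Expected value of P, given a dict mapping scores to probabilities."""
    return sum(prob * score for score, prob in probs.items())

# Uniform prior: equally confident in every score from 1 to 15.
uniform = {score: 1/15 for score in range(1, 16)}
print(round(estimate(uniform), 2))  # 8.0
```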

Suppose now she’s given positive feedback (“You got 14 of 15 correct!”)—label this information “+F”.  Given this feedback, she can form a new estimate, E[P|+F] of how well she’d do on a future test—and since her feedback was positive, that new estimate should be higher than her original estimate of 8.

Perhaps, given the feedback, she’s now relatively confident she’d get a good score on the test—her probabilities look like this:
[Figure: a probability distribution concentrated on high scores, with 0.25 on 15, 0.2 on 14, 0.15 on 13, and smaller weights trailing off below]
Then her estimate for her performance is roughly 13:

E[P | +F]  ≈  0.25*15 + 0.2*14 + 0.15*13 + …  ≈  13
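Continuing the sketch: the weights on 15, 14, and 13 below are the ones given above; the rest of the distribution is an illustrative fill-in of mine, chosen so the weights sum to 1 and the estimate lands around 13.

```python
# Hypothetical post-feedback distribution: the weights on 15, 14, and 13 match
# the ones above; the remaining mass (on 12 and 11) is an illustrative fill-in.
post_feedback = {15: 0.25, 14: 0.20, 13: 0.15, 12: 0.20, 11: 0.20}
expected_score = sum(prob * score for score, prob in post_feedback.items())
print(round(expected_score, 1))  # 13.1, i.e. roughly 13
```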

Now in the final stage, she’s told that the positive feedback she received was bunk.   So the total information she received is “positive feedback (+F), but told that it was bunk”. What should her final estimate of her future performance, E[P | +F, told bunk], be?

Well, how much does she believe the experimenters?  After all, they’ve already admitted they’ve lied to her at least once; so how likely does the fact that they told her the feedback was bunk make it that the feedback in fact was bunk?  Something less than 100%, presumably.

Now, what she was told about whether the feedback was bunk is relevant to her estimate of her future performance only because it is evidence about whether the feedback was actually bunk. Thus information about whether the feedback was bunk should “screen off” what the experimenters told her.  (If a hyper-reliable source told her “The feedback was legit—they were lying to you when they told you it was bunk”—she should go back to estimating that her performance would be quite good, since the legitimate feedback was positive.)

Upshot: so long as Baya doesn’t completely trust the psychologist’s claim that the information was bunk—as she shouldn’t, since they’re in the business of deception—then she should still give some credence to the possibility that the feedback wasn’t bunk.
Crucial point: if she’s unsure whether the feedback she received was bunk, her final estimate for her future performance should be a weighted average of her two previous estimates.

Conditional on the feedback actually being bunk, her estimate should revert to its initial value of 8:

E[P | +F, told bunk, is bunk] = 8.

Meanwhile, conditional on the feedback not actually being bunk, her estimate should jump back up to its previous high value of 13:

E[P | +F, told bunk, not bunk] = 13.

When she doesn’t know whether the feedback is bunk or not, she should think to herself, “Maybe it is bunk, in which case I’d expect to get only an 8 on a future test; but maybe it’s not bunk, in which case I’d expect to get a 13 on a future test.”  Thus (by what’s known as the law of total expectation) her overall estimate should be an average of these two possibilities, with weights determined by how likely she thinks they are to obtain.

So, for example, suppose she thinks it’s 80% likely that the psychologists are telling her the truth when they told her the feedback was bunk.  Then her final estimate should be:

E[P | +F, told bunk]  =  0.8*8 + 0.2*13  =  9.
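Here’s the same mixture calculation as a quick sketch, using the numbers above (80% trust in the debriefing):

```python
# Final estimate = trust-weighted mix of the two conditional estimates.
trust = 0.8          # credence that the feedback really was bunk
e_if_bunk = 8        # E[P | +F, told bunk, bunk]: revert to the initial estimate
e_if_not_bunk = 13   # E[P | +F, told bunk, not bunk]: keep the post-feedback estimate

final = trust * e_if_bunk + (1 - trust) * e_if_not_bunk
print(round(final, 2))  # 9.0
```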

This value is greater, of course, than her original estimate of 8. (The reasoning generalizes; see the Appendix below.)

By exactly parallel reasoning, if Baya were instead given negative feedback (–F) at the beginning, she would instead end up with an estimate slightly lower than her initial value of 8; say, E[P | –F, told bunk]  =  7.

As a result, there should be a difference between the two conditions.  Even after being told the feedback was bunk, both groups should wonder whether it in fact was.  Because of these doubts, those who received positive feedback should adjust their estimates slightly up; those who received negative feedback should adjust them slightly down. Thus, even if our subjects are rational Bayesians, there should be a difference between the groups who received positive vs. negative feedback, even after being told it was bunk:

E[P | +F , told bunk]    >   E[P]   >   E[P | –F, told bunk].
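And here’s a quick check that this ordering holds at any level of trust short of certainty. The conditional estimates (13 if the positive feedback was legit, 3 if the negative feedback was legit) are illustrative assumptions of mine, not numbers from the studies:

```python
# Verify E[P | +F, told bunk] > E[P] > E[P | -F, told bunk] for various trust levels.
prior = 8                       # initial estimate E[P]
after_pos, after_neg = 13, 3    # assumed estimates conditional on the feedback being legit

for trust in [0.5, 0.8, 0.9, 0.99]:
    final_pos = trust * prior + (1 - trust) * after_pos
    final_neg = trust * prior + (1 - trust) * after_neg
    assert final_pos > prior > final_neg
    print(f"trust={trust:.2f}:  {final_pos:.2f} > {prior} > {final_neg:.2f}")
```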

The belief perseverance effect should be expected of rational people.

A prediction of this story is that insofar as the subjects are inclined to trust the experimenters, their estimates of their future performance should be far less affected by the feedback after they’re told it was bunk than before.  (The more they trust the experimenters, the more weight their initial estimate carries in the final average.)

This is exactly what the studies find. For subjects who were never told that the feedback was bunk, those who received negative feedback estimated their future performance at 4.46, while those who received positive feedback estimated it at 13.18.

In contrast, for subjects who were told the feedback was bunk, those who received negative feedback estimated their future performance at 7.96, while those who received positive feedback estimated it at 9.33.

There is a statistically significant difference between these two latter values—that’s the belief perseverance effect—but it is much smaller than the initial divergence.  As the rational story predicts.
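To make that comparison explicit, here’s the arithmetic on the reported estimates:

```python
# Gap between positive- and negative-feedback groups, with vs. without debriefing.
gap_without_debriefing = 13.18 - 4.46   # about 8.7 points
gap_after_debriefing = 9.33 - 7.96      # about 1.4 points
print(round(gap_without_debriefing, 2), round(gap_after_debriefing, 2))
```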

Another prediction of this rational story is that insofar as psychologists can get subjects to fully believe that the feedback really was bunk, the belief perseverance effect should disappear.

Again, this is what they find.  Some subjects were given a much more substantial debriefing—explaining that the task itself is not a legitimate measure of anything (no one can reliably identify the real letters).  Such subjects exhibited no belief perseverance at all.

Upshot: in the lab, the belief perseverance effect could well be fully rational.

Okay. But what about Uncle Ron?

Well, obviously real-world cases like this are much more complicated. But this one does share some structure with the lab example above.

Ron originally had some (perhaps quite low) degree of belief that something fishy was going on with Biden. He then saw a video which boosted that level of confidence. Finally, he was told that the video was bunk.

So long as he doesn’t completely trust the source that debunks the video, it makes sense for him to remain slightly more suspicious than he was originally.  How suspicious, of course, depends on how much he ought to believe that the video really was bunk.

But even if he trusts the debunker quite a bit, a small bump in suspicion will remain. And if he sees many bits of evidence like this, then even if he’s pretty confident that each one is bunk, his suspicions might reasonably start to accumulate.
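To get a feel for how those small bumps can compound, here’s a toy simulation. All the numbers in it are assumptions of mine (each video, if it were legit, would multiply his odds by 5; he’s 90% sure each individual debunking is correct); the point is just the qualitative pattern.

```python
# Toy model of epistemic "death by a thousand cuts": each debunked video leaves
# Ron's credence as a trust-weighted average of where it was and where it would
# be if the video were legitimate evidence.

def boost(credence, bayes_factor):
    """Credence after a Bayesian update with the given likelihood ratio."""
    odds = credence / (1 - credence)
    new_odds = bayes_factor * odds
    return new_odds / (1 + new_odds)

credence = 0.01           # starts out 1% confident that something fishy is going on
trust_in_debunker = 0.9   # 90% sure each individual debunking is correct

for video in range(1, 21):
    if_legit = boost(credence, bayes_factor=5)
    credence = trust_in_debunker * credence + (1 - trust_in_debunker) * if_legit
    if video % 5 == 0:
        print(f"after {video} debunked videos: credence = {credence:.2f}")
```

Each individual bump is tiny, but because every debunked video leaves a little residue of suspicion, the toy Ron’s credence creeps steadily upward.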

If that's right, the fall into conspiracy theories is an epistemic form of death-by-a-thousand-cuts. The tragedy is that rationality may not guard against it.


What next?
If you liked this post, consider sharing it on social media or signing up for the newsletter.
For more on belief perseverance, check out the recent controversy over the "backfire effect"—the result that sometimes people double-down on their beliefs in the face of corrections. See e.g. Nyhan and Reifler 2010 for the initial effect, and then Wood and Porter 2019 for a replication attempt that argues that the effect is very rare.
For more discussion, see the reddit thread on this post.

Appendix

Consider the Baya case. Why does the reasoning go through generally?

P is a random variable—a function from possibilities to numbers—that measures how well she would do on a test if she were to retake it.  Given her (probabilistic) credence function C, her estimate for P is

E[P]  =  Σ_t C(P = t)·t,

where the sum ranges over the possible values t of P (here, 1 through 15).
In general, her conditional estimate for P, given information X, is:
E[P | X]  =  Σ_t C(P = t | X)·t.
Here are the needed assumptions for the above reasoning to go through.

First, information about whether the feedback was bunk screens off whether she was told it was bunk from her estimate about P:
E[P | +F, told bunk, bunk]  =  E[P | +F, bunk]

and

E[P | +F, told bunk, not bunk]  =  E[P | +F, not bunk].
What should her estimate be in those two cases?

Well, conditional on the feedback but also it being bunk, she should revert to her initial estimate E[P]:

E[P | +F, bunk]  =  E[P].
Conditional on it not being bunk, she should move her estimate in the direction of the feedback—so if the feedback is positive, she should raise her estimate of her performance:
E[P | +F, not bunk]  >  E[P].
But of course she doesn’t know whether it was bunk or not—all she knows is that she was told it was bunk.  By the law of total expectation, and then our screening off assumption, we have:
E[P | +F, told bunk]
  =  C(bunk | +F, told bunk)·E[P | +F, told bunk, bunk]  +  C(not bunk | +F, told bunk)·E[P | +F, told bunk, not bunk]
  =  C(bunk | +F, told bunk)·E[P]  +  C(not bunk | +F, told bunk)·E[P | +F, not bunk]
  >  E[P],   so long as C(not bunk | +F, told bunk) > 0.
By parallel reasoning, if she gets negative feedback she should end up with a lower estimate than her initial one:
E[P | –F, told bunk]  <  E[P].
And thus, whenever she doesn’t completely trust the experimenters when they tell her the info is bunk, the feedback should still have an effect:
E[P | +F, told bunk]  >  E[P]  >  E[P | –F, told bunk].
Which, of course, is just the belief perseverance effect.
12 Comments
Peter Gerdes
4/3/2021 03:47:08 pm

That argument's way too fast. You can't just assume that the probability distribution given Told got 14/15 & told was lie & was lie is the same as the probability distribution given no information.

Specifically, you assume that being told by the researchers that you got 14/15 and then being told "no sorry that was a lie" should make it more likely than it was previously to believe that you really got a 14/15. Ok, that seems plausible insofar as it makes 14 more likely than 15 but I think it also makes it far more likely you did really bad at all (one obvious thing for researchers to do if they are lying is reverse them).

I mean, I think the most intuitive prior would say that it's just as likely or more that researchers who tell you you got 14/15 and then say they lied flipped the scores of performers as it is that they really graded them, told you a correct score and then lied about it being a lie.

I admit the experiment isn't perfect but I think if you went ahead and asked anyone in the experiment if they were thinking "well maybe the experimenters lied" I doubt they would say yes.

—-

But this is an easy problem to fix. We just need to find a version of the experiment that increases the confidence the participants have the papers really weren't graded. Simple solution is to set up a video which shows the received papers being burned immediately after collection or other such device and see if it eliminates the effect.

I'm willing to bet it won't.

Reply
Kevin
4/4/2021 01:11:35 pm

Thanks for the thoughts!

Not sure I'm following all the details. First point: as I mentioned, when they do a more extensive debriefing so that people really do believe the feedback didn't tell them anything, the effect DOES go away. Isn't this the sort of variation you were referring to at the end? To me that suggests that trust of the setup is playing a big role.

Second, the thing you said I'm assuming is what I was deriving from the screening-off and lack-of-full trust assumptions; so I take it you're rejecting one of those. But which one? (Obviously I agree they CAN fail in situations like this; the claim is just that it's not implausible that often they hold.)

Some more details: as I was intending it to be interpreted, "bunk" means "the feedback you received was not indicative of your performance", so "not bunk" means "the feedback you received WAS indicative of your performance". Are you happy with the assumption that Baya's estimate, conditional on "told 14/15 correct, and that feedback WAS indicative of my performance" should be higher than it was initially? That seems pretty solid to me.

So I take it you're questioning either (1) that E[P | +F, bunk] = E[P], or the first screening-off condition that (2) E[P | +F, told bunk, bunk] = E[P | +F,bunk]. It sounds like you're thinking that E[P | +F,told bunk, bunk] should actually be much LOWER than her initial estimate? But why would that be? If your initial estimate is middling, and you fully trust me and (switching the order here) I tell you "what I'm about to tell you has nothing to do with your performance. You got 14/15 right", surely you shouldn't take this to be strong evidence that you got way LESS of them right than you initially did. After all, I basically just told you to disregard my statement! If you completely trust me (so "told bunk" is equivalent to "told bunk & bunk"), then shouldn't you disregard it?

Reply
Eric
4/5/2021 09:15:13 am

Hi, Kevin,

Two things:

First, your blog post basically provides a nice theoretical backdrop for an article the NY Times ran on the same day. Apparently, Denver Riggleman, a former GOP congressman, is trying to combat the belief perseverance effect you're interested in. Maybe an interview opportunity for you?

https://www.nytimes.com/2021/04/03/us/politics/denver-riggleman-republican-disinformation.html?referringSource=articleShare

Second, more broadly, I am wondering what we gain by capturing the "rationality" at stake in a cognitive bias like belief perseverance? If I understand you, your bigger picture idea is that belief perseverance adheres to a bayesian model for rationality-- though biased, subjects update their expected probabilities as a bayesian model would recommend. And so belief perseverance is rational qua such a bayesian model. So instead of saying that someone who suffers from belief perseverance is thinking irrationally, we should, according to you, say they are thinking rationally. But what's the conceptual upshot here? Couldn't we argue that we call belief perseverance 'irrational' because of how non-cognitive psychological states distort the bayesian update function. In the case you examine, these might be the states that encode the "amount of trust" a subject puts in someone giving them information. And it's the presence of those states and their collateral distorting effect that grounds our use of 'irrational'?

The blog's great! Super interesting! Keep up the great work!

Reply
Kolja Keller
4/7/2021 08:16:50 am

I'm not Kevin, but I agree with what he says here and the general project, so I have a view on the upshots: First, psychologists need to stop being blunt externalists about rationality when they're testing who is and isn't irrational. That way they're not mislabeling phenomena as irrational that aren't.

And the second upshot is that this can give us a better idea for how to fix our democracies because, contrary to what the psychologists say, we're not surrounded by a bunch of deeply irrational people, but instead we're surrounded by people having gotten into a real pickle of an evidential situation.

It can help us come up with better strategies than "fact checking" which becomes useless in curing bad beliefs as soon as the person believing the falsehoods has reason to distrust the fact checker. But the more a fact-checker disagrees with you, the more reason you get for distrusting them. And that's not irrational, since you couldn't rationally believe that you have less than .5 chance of generally getting things correct.

So, this can get us maybe on the path for figuring out how to rationally get someone out of the hole when their overall beliefs in some area are falling below .5 chance of being correct.

And the studies hint at a solution: We can't rely on testimony by some putative authority to set them straight. Because they'll disagree so much that they think the authority isn't one. Instead, you need to, in detail, spell out the evidence for whatever is false without leading with the conclusion. And that, unfortunately, takes a lot of finesse, time, and energy.

So, practically, you wouldn't want a fact-checker leading with their fact-o-meter. You want them to lay out all the relevant evidence, and show their work that they didn't select evidence in a biased way, and then make people read it. That would allow them to *see* that they're wrong without having to believe someone just saying that they're wrong.

Reply
Eric
4/7/2021 02:48:48 pm

Interesting ideas. Though I will say the promise of your sketched alternative panning out is dubious. That interview I linked to in my original post showcases the failure of just the sort of granular deliberations you offer as a viable alternative to "fact-checking". And it seems like such failures are not strictly speaking rational but rather a non-cognitive commitment to a belief state.

Kevin
4/7/2021 06:13:57 pm

Thanks both! Super great questions/comments. I don't have any super easy answers, though I'm definitely inclined to agree with the main points that Kolja mentioned—we at least want to make sure we're understanding the causes of the phenomenon properly, if we're to combat it.

One thing I lend some credence to is the idea that irrationalist explanations of these sorts of things actually can exacerbate affective polarization (disliking the other side), since once I think they have their beliefs for irrational reasons, it's easier to think their beliefs and actions are not just misguided but immoral. (The "Plea for Political Empathy" post makes the argument for this—I won't pretend it's a knockdown one.) So that's one reason why reconceptualizing it might matter.

I also think it's definitely right that other factors (e.g. evidence about how/why people trust things) could show that the process as a whole is irrational! I'd just take this argument to show that the mere qualitative findings of belief persistence from these studies actually are exactly what we should expect from rational folks, so that these studies themselves shouldn't be taken to bolster our confidence that the explanation is irrationality.

Also, thanks for the NYT piece recommendation! Looks super interesting.

David
4/7/2021 02:01:25 pm

I just wanted to say that while this is a great explanation of how people get to the wrong conclusion, I still feel it is irrational. If you find out that the examiner is unreliable, then the rational conclusion you can draw is exactly that; without some other information not provided in the scenario, there isn't any reason any information they have provided should be used, leaving you with only your original E[P] estimate.

The critical problem is that people substitute rational analysis of the reliability of information with Trust. So when that information comes into question (such as them declaring their first response was bunk) people don't properly reevaluate the reliability of their information but instead continue to accord it trust and reliability, that the info is contradictory might result in an average probability but they are still according both statements a level of reliability above neutral that they shouldn't have.

Reply
Kevin
4/7/2021 06:17:22 pm

Thanks for the comment!

Hm, interesting. So are you suggesting that in the cases at hand, the irrationality is simply that they are not certain that the feedback they got was bunk, once they're told it is? I certainly agree that in some cases that could be irrational, but I'd also think in some it wouldn't be—the structure of the experiment doesn't do enough to determine it.

But maybe I've misread you, and you're instead questioning one of the screening-off premises, like Peter did above? That's definitely a way to go; I tried to say in response to him why I don't think it's plausible in lots of cases, but there's obviously more to be said.

Reply
Nathan
4/9/2021 06:57:14 pm

In other words, because we must take the source at their word for the claim's truth, the claim's credibility becomes indexed to the credibility of its source, instead of a situation where each claim can have its own freestanding credibility rating. Even if we know we can only give a low level of belief to a source, because that source's credibility remains above zero it still affects our overall belief. This is only a problem in situations where a claim's credibility MUST be derived from the credibility of its source, something Kolja's proposed solution points out. If we could do a kind of objective catalogue of the evidence in an unbiased way, a kind of "sourceless" presentation, then that might work. But this puts us up against the practical limits of doing a presentation like that for every issue where this problem pops up.

How can a slow, rational process like that keep pace with the speed of social media? In settings like that reliance on trust in sources seems at its highest.

Reply
Kevin
4/10/2021 09:47:13 am

Yeah, I think that's a good way to put it! As information sources get more fast-paced and varied, people can (and have to!) use their own assessments of trustworthiness to filter what to look at and how much to believe it far more than they used to. This leads to all sorts of interesting dynamics, like the fracturing of the media landscape. But it's really a relatively simple (and sensible) mechanism, so it's hard to know how to combat it effectively.

Reply
Emma
5/16/2021 11:40:50 pm

Hi Kevin, thank you for a really interesting article. I know many of us — myself included — have watched loved ones slowly fall prey to disinformation, and felt helpless when they resort to the same conspiracies after deep conversations debunking them.

One small, semantic point that I disagree with is your final sentence: “the tragedy is that rationality may not guard against it.” My issue lies with the word “guard.” The reason for this is that your article considers people at the tail end of the conspiracy-consumption life cycle: the Uncle Ron figure has already fully subscribed to conspiracy theories, and is likely immersed in social media communities where these conspiracies are frequently promulgated. Your article demonstrates the difficulty in deradicalizing people from conspiracy theories, because even if the underpinnings of each belief are debunked, an overarching suspicion and susceptibility to radicalization lingers. Additionally, rationality and evidence-based arguments may have very little sway over an Uncle Ron who may be emotionally, socially, and possibly even financially invested in conspiracy communities.

However, rationality through skills such as integrated complexity, critical thinking, and media literacy can be very effective in helping consumers to identify and avoid disinformation and conspiracy theories at the beginning of the conspiracy-consumption life cycle. Radicalization can be prevented by education that emphasizes rationality and critical thinking and counters binary “black and white” thinking. While this style of intervention may not help someone as far in as Uncle Ron to easily find the off ramp from conspiracies, there is empirical evidence that media literacy programming can make a tangible difference in building resilience and “guarding" against radicalization.

If you are interested, I’ve linked a few of many interesting studies/programs around this topic below. Included is the work of Sara Savage, a social psychologist at Cambridge who has led a number of empirically-driven interventions to prevent violent extremism via the integrated complexity model. There are also a number of interactive serious games based on the theory of “active inoculation” that help youth identify disinformation. Active inoculation serious gaming is increasingly being implemented in the US and abroad.

- One of Dr. Savage’s articles: https://core.ac.uk/download/pdf/71953821.pdf
- Fakescape serious game: https://www.fakescape.cz/en
- Research backing Harmony Square: https://misinforeview.hks.harvard.edu/article/breaking-harmony-square-a-game-that-inoculates-against-political-misinformation/
- Research on active inoculation and “prebunking”: https://www.tandfonline.com/doi/full/10.1080/10463283.2021.1876983?scroll=top&needAccess=true
- UN Chronicle article: https://www.un.org/en/chronicle/article/media-and-information-literacy-means-preventing-violent-extremism
- OSCE article: https://www.osce.org/secretariat/475496

All in all, rationality can and does help guard against disinformation and radicalization. I know this is such a small nit-pick in an otherwise great article, but I think it’s important to emphasize that in many cases, rationality may in fact be the very tool that helps someone on the brink of conspiracy to re-evaluate their sources.

Reply
Kevin
5/19/2021 10:47:03 am

Hi Emma,

Thanks so much for this! Super interesting stuff. I wasn't aware of Savage's work on this, and many of the things I've read have been about the overall INeffectiveness of trying to rein in various patterns of thinking that lead to conspiracy theories (e.g. the broad-scale failures of debiasing methods in psychology, like https://journals.sagepub.com/doi/10.1111/j.1745-6924.2009.01144.x), and some of the work that suggests that increases in cognitive reflection/sophistication lead to MORE biased processing of information (e.g. https://cpb-us-w2.wpmucdn.com/u.osu.edu/dist/e/65099/files/2018/08/2017_Kahan_et_al_motivated_numeracy_and_enlightened_selfgovernment-q52qcx.pdf), which had led me to an overall negative impression of such individual-level interventions.

But the links you suggest offer an interesting alternative picture that I'll need to look into more. So I'll definitely do that; thanks very much!

Kevin

Reply




    Kevin Dorst

    Philosopher at MIT, trying to convince people that their opponents are more reasonable than they think

    Quick links:
    - What this blog is about
    - ​Reasonably Polarized series
    - RP Technical Appendix

    Follow me on Twitter or join the newsletter for updates.

