
Stranger Apologies

Why Arguments Polarize Us

10/31/2020

(1400 words; 7 minute read.)
The most important factor that drove Becca and me apart, politically, is that we went our separate ways, socially. I went to a liberal university in a city; she went to a conservative college in the country. I made mostly-liberal friends and listened to mostly-liberal professors; she made mostly-conservative friends and listened to mostly-conservative ones.

As we did so, our opinions underwent a large-scale version of the group polarization effect: the tendency for group discussions amongst like-minded individuals to lead to opinions that are more homogeneous and more extreme in the same direction as their initial inclination.

The predominant force that drives the group polarization effect is simple: discussion amongst like-minded individuals involves sharing like-minded arguments. (For more on the effect of social influence, see this post.)

Spending time with liberals, I was exposed to a predominance of liberal arguments; as a result, I became more liberal. Vice versa for Becca.

Stated that way, this effect can seem commonsensical: of course it’s reasonable for people in such groups to polarize. For example: I see more arguments in favor of gun control; Becca sees more arguments against it. So there’s nothing puzzling about the fact that, years later, we have wound up with radically different opinions about gun control. Right?

Not so fast.
In 2010, Becca and I each knew where we were heading. I knew that I’d be exposed mainly to arguments in favor of gun control, and that she’d be exposed mainly to arguments against it.  At that point, we both had fairly non-committal views about gun rights—meaning that I didn’t expect the liberal arguments I’d witness to be more probative than the conservative arguments she’d witness.

This gives us the puzzle. At a certain level of generality, I know nothing about gun rights now that I didn’t in 2010. Back then, I knew I’d see arguments in favor, but I was not yet persuaded. Now in 2020 I have seen those arguments, and I am persuaded.

Why the difference? How could receiving the arguments in favor of gun control have a (predictably) different effect on my opinion than predicting that I’d receive such arguments? And given that I could’ve easily gone to a more conservative college and wound up with Becca’s opinions, doesn’t this mean that my current opinion that we need gun control was determined by arbitrary factors? In light of this, how can I maintain my firm belief?

As we’ve seen, ambiguous evidence can in principle offer an answer to these questions. Here I’ll explain how this answer can apply to the group polarization effect.

Begin with a simple question: why do arguments persuade? You might think that’s a silly question—arguments in favor of gun control provide evidence for the value of gun control, and people respond to evidence; so rational people are predictably convinced by arguments. Right?

Wrong. Arguments in favor of gun control don’t necessarily provide evidence for gun control—it depends on how good the argument is!

When someone presents you with an argument for gun control, the total evidence you get is more than just the facts the argument presents; you also get evidence that the person was trying to convince you, and so that they were appealing to the most convincing facts they could think of.

If the facts were more convincing than you expected—say, “Guns are the second-leading cause of death in children”—then you get evidence favoring gun control. But if the facts were less convincing than you expected—say, “Many people think we should ban assault weapons”—then the fact that this was the argument they came up with actually provides evidence against gun control. (H/t Kevin Zollman on October surprises.)

This is a special case of the fact that, when evidence is unambiguous, there’s no way to investigate an issue that you can expect to make you more confident in your opinion.
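To make that concrete, here is a toy Bayesian calculation. The numbers, and the framing of the datum as "their best argument was strong vs. weak," are illustrative assumptions of mine, not part of the formal model: the point is just that your prior is already the average of the posteriors you might land on, so no unambiguous inquiry can be expected to raise your confidence.

```python
# Toy illustration (hypothetical numbers): with unambiguous evidence, your
# current credence already averages over what you expect to see, so no
# investigation can be expected to raise it.

prior = 0.5  # credence that gun control is a good idea (made-up)

# Likelihoods of the advocate producing a strong vs. weak best argument,
# depending on whether the claim is true (all numbers are illustrative).
p_strong_if_true, p_strong_if_false = 0.8, 0.4

def posterior(saw_strong: bool) -> float:
    """Bayes' rule on the single datum 'their best argument was strong/weak'."""
    like_true = p_strong_if_true if saw_strong else 1 - p_strong_if_true
    like_false = p_strong_if_false if saw_strong else 1 - p_strong_if_false
    return prior * like_true / (prior * like_true + (1 - prior) * like_false)

p_strong = prior * p_strong_if_true + (1 - prior) * p_strong_if_false

print(posterior(True))   # ~0.67: a stronger-than-expected argument raises credence
print(posterior(False))  # 0.25: a weaker-than-expected one lowers it
expected = p_strong * posterior(True) + (1 - p_strong) * posterior(False)
print(expected)          # 0.5: the expected posterior equals the prior
```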

Why, then, do arguments for a claim predictably shift opinions about it?

My proposal: by generating asymmetries in ambiguity. They make it so that reasons favoring the claim are less ambiguous—and so easier to recognize—than those telling against it.

Here’s an example—about Trump, not gun control.
Argument: “Trump’s foreign policy has been a success. In particular, his norm-breaking personality was needed in order to shift the political consensus about foreign policy and address the growing confrontation between the U.S. and an increasingly aggressive and dictatorial China.”
This is an argument that we should re-elect Trump. Is this a good argument—that is, does it provide evidence in favor of Trump being re-elected?

I’m not sure—the evidence is ambiguous. What I am sure of is that if it is a good argument, then it’s so for relatively straightforward reasons: “Trump’s presidency has had and will have good long-term effects.”

If it’s not a good argument, then it’s for relatively subtle reasons: perhaps we should think, “Is that the best they can come up with?”; perhaps we should think that relations with China were already changing before Trump; perhaps we should be unsure whether inflaming the confrontation is a good thing; etc.

Regardless: if it’s a good argument, it’s easier to recognize as such than if it’s a bad argument. As a result, we can expect to be, on average, somewhat persuaded by arguments like this.
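Here is a back-of-the-envelope way to see why such an asymmetry persuades on average. The recognition probabilities and step size below are illustrative assumptions of mine, not the model in the technical appendix:

```python
# Good and bad arguments are equally likely; when you can clearly tell which it
# is, you shift your credence accordingly; when you can't, you stay put. The
# asymmetry is just that good arguments are easier to recognize than bad ones.

p_good = 0.5              # chance the argument is in fact good
p_recognize_good = 0.9    # good arguments are easy to see as good
p_recognize_bad = 0.5     # bad arguments are harder to see as bad
shift = 0.05              # credence change when the argument's quality is clear

expected_shift = (
    p_good * p_recognize_good * shift          # recognized-good: move toward the claim
    - (1 - p_good) * p_recognize_bad * shift   # recognized-bad: move away from it
)                                              # unclear cases: no move
print(expected_shift)     # ~ +0.01: on average, the argument persuades a little
```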

As before, it’s entirely possible for this to be so and yet for the argument to satisfy the value of evidence—and, therefore, for us to expect that listening to the argument, rather than ignoring it, will make us more accurate. Whenever we’re given the option to listen to a new argument or not, we should take it if we want our beliefs to be accurate.

And yet, because each argument is predictably (somewhat) persuasive, long periods of exposure to pro (con) arguments can lead to predictable, profound, but rational shifts in opinions.

We can model how this worked with me and Becca.  The blue group (me and my liberal friends) were exposed to arguments favoring gun control—that is, arguments that were less ambiguous when they supported gun control than when they told against it. The red group (Becca and her conservative friends) were exposed to arguments disfavoring gun control—that is, arguments that were more ambiguous when they supported gun control than when they told against it.

Suppose that, as a matter of fact, 50% of the arguments point in each direction. Because of the asymmetries in our ability to recognize which arguments are good and which are bad, the result is polarization:
[Figure. Caption: Simulation of 20 (blue) agents repeatedly presented with arguments that are less ambiguous when they support Claim, and 20 (red) agents presented with arguments that are less ambiguous when they tell against Claim. In fact, 50% of arguments point in each direction. Thick lines are averages within groups.]
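For anyone who wants to tinker, here is a rough sketch of how such a simulation might be set up. To be clear, this is a simplified reconstruction under my own assumptions (a fixed step size, and a "stay put when the argument is ambiguous" rule), not the code behind the figure or the model in the technical appendix.

```python
import random

def simulate(n_agents=20, n_rounds=100, p_favor=0.5,
             p_clear_favor=0.9, p_clear_against=0.5, step=0.02, seed=0):
    """Each round every agent hears one argument, which in fact favors the Claim
    with probability p_favor. If the agent can clearly tell which way it points
    (more likely for pro arguments than con ones, per the asymmetry), they shift
    their credence by `step` in that direction; otherwise they stay put.
    Returns each agent's credence trajectory, starting from 0.5."""
    rng = random.Random(seed)
    trajectories = [[0.5] for _ in range(n_agents)]
    for _ in range(n_rounds):
        for traj in trajectories:
            favors = rng.random() < p_favor
            clear = rng.random() < (p_clear_favor if favors else p_clear_against)
            shift = (step if favors else -step) if clear else 0.0
            traj.append(min(1.0, max(0.0, traj[-1] + shift)))
    return trajectories

# Blue group: pro-Claim arguments are the less ambiguous ones.
blue = simulate()
# Red group: the mirror image—con arguments are the less ambiguous ones.
red = simulate(p_clear_favor=0.5, p_clear_against=0.9, seed=1)

print(sum(t[-1] for t in blue) / len(blue))   # group average drifts well above 0.5
print(sum(t[-1] for t in red) / len(red))     # group average drifts well below 0.5
```

Because the blue group under-responds to the con arguments it finds ambiguous (and the red group to the pro arguments), the two group averages drift apart even though both face a 50/50 mix of arguments.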
Notably, although such ambiguity asymmetries are a force for divergence in opinion, that force can be overcome.  In particular, as the proportion of actually-good arguments gets further from 50%, convergence is possible even in the presence of such ambiguity-asymmetries.  Here’s what happens when 80% of the arguments provide evidence for gun control:
[Figure. Caption: As above, but this time, in fact, 80% of arguments tell in favor of Claim.]
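In the toy sketch above, this regime corresponds roughly to running simulate(p_favor=0.8) for each group while keeping its asymmetry: the expected per-round drift is then positive for both, so both group averages climb, with the group for whom pro arguments are more ambiguous lagging behind.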
Thus polarization due to differing arguments is not inevitable—but the more conflicted and ambiguous the evidence is, the more likely it is.

How plausible is this as a model of the group polarization effect? Obviously there’s much more to say, but there is indeed some evidence that group-polarization effects—and argumentative persuasion in particular—are driven by ambiguous evidence.

First, the group discussions that induce polarization also tend to reduce the variance in people's opinions, and the amount of group shift in opinion is correlated with the initial variance of opinion. 

These effects make sense if ambiguous arguments are driving the process. For: (1) the more ambiguous evidence is, the more variance of opinion we can expect; (2) if what people are doing is coordinating on what to think about various pieces of evidence, we would expect discussion to reduce ambiguity and hence reduce opinion-variance; and (3) the more initial variance there is, the more diverse the initial bodies of evidence were—so the more we should expect discussion to reveal new arguments, and hence to lead participants to shift their opinions.

Second, participation in the discussion (as opposed to mere observation) heightens the polarization effect, as can merely thinking about an issue—especially if people are trying to think of reasons for or against their position or expecting to have a debate with someone about it.  

If what people are doing in such circumstances is formulating arguments (perhaps through a mechanism of cognitive-search), then these are exactly the effects that this sort of model of asymmetrically-ambiguous arguments would predict—for they are in effect exposing themselves to a series of asymmetrically-ambiguous arguments.

Upshot: it is not irrational to be predictably persuaded by (ambiguous) arguments—and the more people separate into like-minded groups, the more this leads to polarization.


What next?
If you liked this post, consider subscribing to the newsletter or spreading the word.
For the details of the models of asymmetrically-ambiguous arguments and the simulations that used them, check out the technical appendix (§7).
Next Post: we'll return to confirmation bias and see how selective search for new information is rational in the context of ambiguous evidence.
10 Comments
Tom Stafford
11/3/2020 01:46:33 am

I am a little confused about your claim here. Are you saying that arguments are ambiguous because of the *prior beliefs* of arguers (and so it is easier for Kevin to recognise a good pro-gun control argument)? Or is it that there is something in the structure of arguments which get rehearsed within a polarising social group (e.g. they are formulated as being pro a particular position), which means the structure of the argument - not properties of the arguer - makes it provide asymmetrically ambiguous evidence?

Kevin
11/5/2020 08:23:22 am

Thanks Tom––sorry to be unclear! The claim was about arguments in general; or, maybe better, minimally-decent arguments in general. The thought being that from an unambiguous-Bayesian perspective, it's unclear why hearing arguments favoring P could be expected to raise people's (your) credence in P. But we get an explanation (and a toy model of it) once we allow that an argument for P can be understood as a way to present evidence such that the P-favoring reasons it contains are less ambiguous and the P-disfavoring reasons it contains are more ambiguous. In that sense, it was meant to be relatively independent of priors.

But of course (and maybe this is why you are puzzled?) that's only going to be *relatively* independent of priors, since of course what reasons point which way is dependent on background beliefs/knowledge.

The reason I'm looking for a mechanism that's relatively prior-independent is that I want to account for how group-polarization-like effects tend to happen even for those who are initially skeptical. (E.g.: the way Pascal tells you to try to convince yourself that God exists, if you're skeptical, is to spend time with people who believe it, learn to respect them and see the value in their arguments and views, etc.)

Does that answer your question? (Or raise a worry?)

Tom Stafford
11/6/2020 12:33:29 am

Thanks, that helps. I think I was confusing the need for an account of why arguments persuade with the need for an overall account of polarisation.

Mario Scharfbillig
11/4/2020 03:56:45 am

OK, if we believe the claim, that is all well and good. Will you also do an analysis for pluralistic environments? It seems to me that your analysis so far is relatively convincing when there is only one right/wrong argument AND when there are only two sides providing the arguments (Rep/Dem). But what if you have multiple parties with different agendas and you need to take that into account? If you could figure that out, that would be interesting and more representative of the world.

Kevin
11/5/2020 08:29:04 am

Thanks Mario! Great question.

As I see it, when we fix an issue—like, P: whether gun control is a good idea—then every argument you'll encounter will (relative to your background knowledge) either tell in favor of P, tell against P, or be neutral wrt P. Of course, there will be a TON of different types of arguments, which will appeal to different people and use different types of evidence, so we absolutely need to account for environments where people are exposed to a variety of different ones—and, importantly, where they have some degree of choice about which ones they hear.

That last step is the one that's next-in-line for a post, where we take these types of argument models and give people choices about which ones to listen to. (Sort of like the choices of cognitive searches from the last post.) The result is going to be that when we do that and have people always get the argument they expect to make them most accurate, they'll (1) have a preference for arguments that favor their prior beliefs, and (2) this will lead to some measure of polarization. I think this amounts to something that looks a lot like the "selective exposure" side of confirmation bias that I mentioned in the previous post.

Does that sound like it'll address your concerns, or did I miss the mark?

mason
12/30/2020 03:47:20 pm

Hi Kevin—absolutely loving this series! I was interested in your characterization of the ambiguity of arguments. You say if an argument is good, it is for straightforward reasons, but if it is not good, it is for relatively subtle reasons. So, the reasoning goes, we should think that arguments are ambiguous, providing strong reasons for, or weak reasons against.

I suppose I was thinking that the analogy with the word completion task ran in the opposite direction. When we confront an argument, we might endeavor to find problems with it. If we find a problem, we have a decisive reason to discount the argument (e.g. it equivocates, or whatever). We've 'found the word', as it were. If we *do not* find a problem, we're in a situation analogous to not finding a word in the completion task. For all we can tell, it's a good argument, but we can't actually rule out its being a bad argument (maybe we just didn't find the flaw). That sounds like the situation in which we cannot find the word.

Can you help me out? Thanks so much!

Kevin
1/6/2021 08:49:35 am

Hi Mason,

Thank you! This is a great question, and one that I don't yet have any answer I'm 100% happy with. I've been thinking over issues like this since I wrote the post, and my inclination is to say the following.

What the structure of the ambiguity is depends on how you decide to *engage* with the argument. If you decide to scrupulously scrutinize it, then you're absolutely right—via the process of scrutinization, you'll either find a problem (in which case you get unambiguous evidence that the argument is flawed), or you don't (in which case you get ambiguous evidence that the argument is sound). That seems totally right.

But contrast that to a case in which you *don't* scrutinize the argument—who has the time to think carefully about the details of all the arguments they're presented with about politics, after all? E.g. imagine you just skim the headlines or top articles on a news site, without thinking too hard about them. In that case, failing to find a flaw is no (or: little) evidence that there's not one, since finding flaws takes cognitive effort and you weren't investing that effort. And, by design, the argument is going to be presenting reasons that, on their face (without scrutiny), DO support the position being argued for. In THAT situation, then, the claim I think is plausible is that if the argument is good, it's less ambiguous evidence than if it's bad. Maybe here's one way to put it: if the argument is good, its face-value reading is right, so you don't NEED to do any cognitive searches to evaluate it. On the other hand, if it's bad, its face-value reading is wrong, so you DO need to do a cognitive search to know how to evaluate it. If we further suppose that you DON'T do a cognitive search, then what you're left with is ambiguous evidence. In short, the characterization of the ambiguity of arguments I gave works for arguments that are not scrutinized. What do you think?

Writing this out makes me think I should actually refine my picture of confirmation bias / biased assimilation from the previous post as well. Now I'm thinking it's clearer to think of the whole process as a potentially two-stage process. First, you get the argument, and, before you've had the chance to scrutinize it, it's either good (unambiguous in favor) or bad (ambiguous against). Then you have a choice: scrutinize, or no? If you don't, you're left with the evidence you had at the outset. If you do, there's a potential to uncover unambiguous evidence that it's flawed, giving you, overall, unambiguous evidence against the conclusion of the argument. There's also the possibility that you fail to find a flaw, giving you ambiguous evidence favoring the conclusion. This makes me think I should combine the "ambiguity asymmetry" models of arguments with the "cognitive search" models from before. Will take a crack at that and see how it goes!

Let me know what you think, if you have the chance. Thanks so much for the question—super helpful!

Kevin

Tom
3/21/2021 09:53:06 am

This is really interesting but how does this deal with common knowledge of rationality? Is each person aware of the distribution of beliefs in the population? I'd always thought we need to assume some basic cognitive arrogance to explain persistent divergence in beliefs alongside common knowledge of that divergence.

Kevin
3/22/2021 08:27:43 am

Thanks! Great question. There are two ways the models I use here avoid the Aumann no-agreeing-to-disagree result.

The first is that agents don't always update by conditioning on a partition. Instead, they often do Jeffrey shifts––where those shifts are constrained by the "value of rationality" (i.e. IJ Good-style value of evidence constraint). The technical appendix for this post shows what the updates look like.

The second is that even though each of these agents IS rational (in the sense that their credences match the designated credence function that obeys the value of evidence), the point of having ambiguous evidence (in the technical sense) is that they're unsure whether they're rational. As a result, they don't even know that they themselves are rational, let alone have common knowledge with each other that they are rational.

Nevertheless, there can be approximations thereof that they do satisfy. For example, they both can be arbitrarily-confident that the rational credence in P for one group is above 0.7, while the rational credence for the other is 0.3, and be arbitrarily confident that both sides are arbitrarily confident of that, and so on. This earlier blog post (https://www.kevindorst.com/stranger_apologies/rational-polarization-can-be-profound-persistent-and-predictable) and the attached technical appendix spells this out in the most detail I've written up so far, but as soon as the draft of the paper is ready there'll be a more detailed explanation.

Thanks for the question!

Tom
3/31/2021 06:07:26 pm

Thanks for the reply!

I don't follow it fully but will have a look at the appendix, & will look for the full paper when it's out.




