KEVIN DORST

Stranger Apologies

Reasonably Polarized: Why politics is more rational than you think.

9/5/2020

(1700 words; 8-minute read.)
[9/4/21 update: if you'd like to see the rigorous version of this whole blog series, check out the paper on "Rational Polarization" I just posted.]
​
​​​A Standard Story
I haven’t seen Becca in a decade. I don’t know what she thinks about Trump, or Medicare for All, or defunding the police.

But I can guess.

​​Becca and I grew up in a small Midwestern town. Cows, cornfields, and college football. Both of us were moderate in our politics; she a touch more conservative than I—but it hardly mattered, and we hardly noticed.

After graduation, we went our separate ways. I, to a liberal university in a Midwestern city, and then to graduate school on the East Coast. She, to a conservative community college, and then to settle down in rural Missouri.

I––of course––became increasingly liberal. I came to believe that gender roles are oppressive, that racism is systemic, and that our national myths let the powerful paper over the past.

And Becca?
You and I can both guess how her story differs. She’s probably more concerned by shifting gender norms than by the long roots of sexism; more worried by rioters in Portland than by police shootings in Ferguson; and more convinced of America’s greatness than of its deep flaws.

​In short: we started with similar opinions, set out on different life trajectories, and, 10 years down the line, we deeply disagree.

So far, so familiar. The story of me and Becca is one tiny piece of the modern American story: one of pervasive—and increasing—political polarization.

It’s often noted that this polarization is profound: partisans now disagree so much that they often struggle to understand each other.

It’s often noted that this polarization is persistent: when partisans sit down to talk about their now-opposed beliefs, they rarely rethink or revise them.

But what’s rarely emphasized is that this polarization is predictable: people setting out on different life trajectories can see all this coming. When Becca and I said goodbye in the summer of 2010, we both suspected that we wouldn’t be coming back. That when we met again, our disagreements would be larger. That we’d understand each other less, trust each other less, like each other less.

And we were right.  That’s why I haven’t seen her in a decade.

Told this way, the story of polarization raises questions that are both political and personal. What should I now think of Becca—and of myself? How should I reconcile the strength of my current beliefs with the fact that they were utterly predictable? And what should I reach for to explain how I came to disagree so profoundly with my old friends?

The standard story: irrationality.

The story says, in short, that politics makes us stupid. That despite our best intentions, we glom onto the beliefs of our peers, interpret information in biased ways, defend our beliefs as if they were cherished possessions, and thus wind up wildly overconfident. You’ve probably heard the buzzwords: “confirmation bias”, “the group-polarization effect”, “motivated reasoning”, “the overconfidence effect”, and so on.

This irrationalist picture of human nature has quite the pedigree—it has won  Nobel Prizes, started academic subfields, and embedded itself firmly in the popular imagination.

When combined with a new wave of research on the informational traps of the modern internet, the standard story offers a simple explanation for why political polarization has exploded: our biases have led us to mis-use our new informational choices. Again, you’ve probably heard the buzzwords: “echo chambers”, “filter bubbles”, “fake news”, “the Daily Me”, and so on.

The result?  A bunch of pig-headed people who increasingly think that they are right and balanced, while the other side is wrong and biased.

It’s a striking story. But it doesn’t work.

It says that polarization is predictable because irrationality is predictable: that Becca and I knew that, due to our biases, I’d get enthralled by liberal professors and she’d get taken in by conservative preachers.

But that’s wrong. When I looked ahead in 2010, I didn’t see systematic biases leading to the changes in my opinions. And looking back today, I don’t see them now.

If I did see them, then I’d give up those opinions. For no one thinks to themselves, “Gender roles are oppressive, racism is systemic, and national myths are lies—but the reason I believe all that is that I interpreted evidence in a biased and irrational way.” More generally: it’s incoherent to believe that your own beliefs are irrational. Therefore, so long as we hold onto our political beliefs, we can’t think that they were formed in a systematically irrational way.

So I don’t see systematic irrationality in my past. Nor do I suspect it in Becca’s. She was just as sharp and critically-minded as I was; if conservative preachers changed her mind, it was not for a lack of rationality.

It turns out that tellers of the irrationalist tale must agree. For despite the many controversies surrounding political (ir)rationality, one piece of common ground is that both sides are equally susceptible to the factors that lead to polarization. As far as the psychological evidence is concerned, the “other side” is no less rational than you––so if you don’t blame your beliefs on irrationality (as you can’t), then you shouldn’t blame theirs on it either.

In short: given that we can’t believe that our own beliefs are irrational, the irrationalist explanation of polarization falls apart.

Suppose you find this argument convincing. Even so, you may find yourself puzzled. After all: what could explain our profound, persistent, and predictable polarization, if not for irrationality? As we’ll see, there’s a genuine philosophical puzzle here. And when we can’t see our way to the solution, it’s very natural to fall back on irrationalism.

In particular: since we can’t view our own beliefs as irrational, it’s natural to instead blame polarization on the other side’s irrationality: “I can’t understand how rational people could see Trump so differently. But I’m not irrational—so the irrational ones must be Becca and her conservative friends, right?”

That thought turns our disagreement into something more. Not only do we think the other side is wrong—we now think they are irrational. Or biased. Or dumb. And that process of demonization—more than anything else—is the sad story of American polarization.
A Reasonable Story     
What if it need not be so? What if we could think the other side is wrong, and not think they are dumb? What if we could tell a story on which diverging life trajectories can lead rational people—ones who care about the truth—to be persistently, profoundly, and predictably polarized?

That’s what I’m going to do. I’m going to show how findings from psychology, political science, and philosophy allow us to see polarization as the result of reasonable people doing the best they can with the information they have. To argue that the fault lies not in ourselves, but in the systems we inhabit.  And to paint a picture on which our polarized politics consists largely of individually rational actors, collectively acting out a tragedy.

Here is the key. When evidence is ambiguous––when it is hard to know how to interpret it—it can lead rational people to predictably polarize.

This is a theorem in standard (i.e. Bayesian) models of rational belief. It makes concrete and confirmed empirical predictions. And it offers a unified explanation of our buzzwords: confirmation bias, the group-polarization effect, motivated reasoning, and the overconfidence effect are all to be expected from rational people who care about the truth but face systematically ambiguous evidence.

More than that: this story explains why polarization has exploded in recent decades. Changes in our social and informational networks have made it so that, with increasing regularity, the evidence we receive in favor of our political beliefs tends to be unambiguous and therefore strong—while the evidence we receive against them tends to be ambiguous and therefore weak. The rise in this systematic asymmetry is what explains the rise in polarization.
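To get a feel for that asymmetry, here is a toy calculation (made-up numbers, a back-of-the-envelope illustration rather than the formal result). Suppose you check a claim by searching for a piece of supporting evidence. Finding it is unambiguous: there is only one way to read it, and it pushes your credence up sharply. Failing to find it is ambiguous: maybe the support isn't there, or maybe you just missed it, and someone who is unsure how thorough their search was may discount the failure. Whether that discounting can be rational is exactly what the coming posts are meant to show; for now, here is simply what it does to the numbers:

```python
# Toy numbers only: an illustration of asymmetrically ambiguous evidence,
# not the formal model. Every quantity here is made up for the example.

prior = 0.5             # your credence that the claim is true
p_find_if_true = 0.6    # chance the search turns up support, if the claim is true
p_find_if_false = 0.0   # nothing genuine to find if the claim is false

p_find = prior * p_find_if_true + (1 - prior) * p_find_if_false   # = 0.30
p_miss = 1 - p_find                                                # = 0.70

# Unambiguous outcome: finding support is decisive in this setup.
post_find = prior * p_find_if_true / p_find                        # = 1.0

# Ambiguous outcome, read strictly: a textbook Bayesian drops to...
post_miss_strict = prior * (1 - p_find_if_true) / p_miss           # ≈ 0.286
# ...and then the expected posterior equals the prior: no predictable drift.
print(p_find * post_find + p_miss * post_miss_strict)              # ≈ 0.5

# But if, being unsure how good your search really was, you discount the
# miss and only drop to, say...
post_miss_discounted = 0.45
# ...then your expected posterior exceeds your prior: predictable drift up.
print(p_find * post_find + p_miss * post_miss_discounted)          # ≈ 0.615
```

A textbook Bayesian with no uncertainty about the quality of their own search would drop all the way on a miss, and the drift would vanish; the ambiguity is doing all the work.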

In short: the standard story is right about which mechanisms lead people to polarize, but wrong about what this means about people. People polarize because they look at information that confirms their beliefs and talk to people that are like-minded. But they do these things not because they are irrational, biased, or dumb. They do them because it is the best way to navigate the landscape of complex, ambiguous evidence that pervades our politics.
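Here is the same mechanism iterated over time, in an illustrative simulation (again my own sketch: modeling "ambiguous" as a simple discount on the evidence's strength is a crude stand-in for the real account, not a statement of it). Two agents see the very same balanced stream of pro and con evidence; the only difference is which side the ambiguity falls on for each of them:

```python
# Illustrative simulation only: two agents, one evidence stream, opposite
# asymmetry in which evidence is ambiguous for them.
import math
import random

def to_credence(log_odds):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-log_odds))

random.seed(0)
a = b = 0.0                      # log-odds of the claim; both start at credence 0.5
for _ in range(200):
    pro = random.random() < 0.5  # a perfectly balanced stream of pro/con evidence
    llr = math.log(2) if pro else -math.log(2)   # full-strength log-likelihood ratio
    a += llr * (1.0 if pro else 0.3)  # for A, evidence against the claim is ambiguous
    b += llr * (0.3 if pro else 1.0)  # for B, evidence for the claim is ambiguous

print(round(to_credence(a), 3), round(to_credence(b), 3))
# The two predictably drift apart: A toward confidence the claim is true,
# B toward confidence it is false.
```

Whether that discounting deserves to be called rational is, of course, the whole question; that is what the coming possibility proof and the rest of the series are for.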

That’s the claim I’m going to defend over the coming weeks.

Here’s how. I’ll start with a possibility proof—a simple demonstration of how ambiguous evidence can lead rational people to predictably polarize. Our goal will then be to figure out what this demonstration tells us about real-world polarization.

To do that, we need to dive into both the empirical and theoretical details. In what sense has the United States become increasingly “polarized”—and why? What would it mean for this polarization to be “rational”—and how could ambiguous evidence make it so? How does such evidence explain the mechanisms that drive polarization—and what, therefore, might we do about them? I’ll do my best to give answers to each of these questions.

This, obviously, is a big project. That means two things.

First, it means I’m going to present it in two streams. The core will be this blog, which will explain the main empirical and conceptual ideas in an intuitive way. In parallel, I’ll post an expanding technical appendix that develops the details underlying each stage in the argument.

Second, it means that this is a work in progress.  It’ll eventually be a book, but—though I’ve been working on it for years—getting to a finished product will be a long process. This blog is my way of nudging it along.

That means I want your help! The more feedback I get, the better this project will become. I want to know which explanations do (and don’t) make sense; what strands of argument are (and are not) compelling; and—most of all—what you find of value (or not) in the story I’m going to tell.

So please: send me your questions, reactions, and suggestions.

And together, I hope, we can figure out what happened between me and my old friends—and, maybe, what happened between you and yours.


What next?
If you’re interested in this project, consider signing up for the newsletter, following me on Twitter, or spreading the word.
Next post: an experiment that demonstrates how ambiguous evidence can lead people to polarize.

​
PS. Thanks to Cosmo Grant, Rachel Fraser, and Ginger Schultheis for helpful feedback on previous drafts of this post—and to Liam Kofi Bright, Cailin O'Connor, Kevin Zollman, and especially Agnes Callard for much help and advice getting this project off the ground.
39 Comments
Evan
9/5/2020 06:05:25 pm

Thank you for sharing this project. After reading the article and working through some of the points that confused me, I have the following questions.
1) Would it be correct to say that in this context, ‘rationality’ is interchangeable with ‘predictability’? This would make sense to me. The small exposure I have to this topic (as a student in a different field) has assumed the existence of rational strategic-actors. Rationality is a necessary assumption for analyzing strategy.
2) What role do you assess that intentional/professional dis/misinformation campaigns play in the phenomenon you describe?
3) Could you expand on the definition of polarized in this context? When you say that “when evidence is ambiguous…it can lead rational people to predictably polarize” does this mean it impacts their affiliation with political parties or factions?
4) Do you assess that a two-party electoral system plays a significant role in ambiguous-evidence driven polarization? Have you observed any differences in this phenomenon among populations with parliamentary systems or systems that tend to have more political parties and an emphasis on coalitions? What about in authoritarian systems?
5)
a. If someone reads this paper and takes care to revisit their positions when additional information becomes available, would this insulate them against this phenomenon? If so, could this lead over time to a “polarized” or at least ontologically useful distinction between degrees of self-awareness?
b. In this project, do you see any possibility of a ‘moving goalpost,’ where labels of “irrational” and “rational” are replaced with, “self-aware of the risks of ambiguous evidence” and “not-self-aware of the risks of ambiguous evidence?”

Thank you for considering my questions. I understand they may be addressed in future posts.

Reply
Kevin
9/6/2020 03:40:29 am

Thanks for the great questions! Yeah, a lot of these issues will come up in future posts in more detail, but here's some quick thoughts:

1) I don't mean 'rationality' to mean the same as 'predictability'. I mean 'rationality' to mean something close to 'forming the best beliefs you can, given the available information'. One way to get predictability without rationality, then, is when someone has systematic biases---say, they're always more confident than they should be. This'll come up more in week 4.

2) Part of the argument is going to be that the way misinformation campaigns (or media spin) works is by manipulating the ambiguity of people's evidence. The argument is going to be that this is how they can polarize even people who are rationally skeptical of the information. That'll come up a bit in week 3, I think.

3) Yes! Much more on what "polarization" means in week 3. But the basic claim is that people's confidence about fixed questions ("Trump is a good president", say) becomes more extreme over time (Republicans get very confident of it; Dems get very confident it's false). Along with that, all sorts of things will happen---liberals and conservatives increasingly "sort" themselves into parties; party members start disliking the other party more and more; etc.

4) Good question! I'm not a huge expert in international comparisons here, but there is evidence that the US has become uniquely polarized in recent decades compared to similar countries (https://www.nber.org/papers/w26669). In general, I think, it's very hard to tell the extent to which the actions and constraints of politicians end up polarizing the country vs. being a symptom of it. But there is definitely an argument that the fact that the US has a system which leads to gridlock when there's disagreement (as opposed to, say, Britain, where one party/coalition gets to have their way for 5 years) leads to increasing dissatisfaction with politics, and that "negative partisanship" (disliking politics, especially the other side) is driving politics in the US more and more. Ezra Klein's "Why We're Polarized" has good stuff on this.

5) It's a good question---I'm really not sure what to think of any effects of coming to be aware of this sort of story. My hope is that seeing political disagreement as due to more rational causes could slow down certain tendencies to demonize the other side, but I sort of doubt it would really have a huge effect. Wrt "moving goalposts", part of what I'll be arguing is that even when you *are* aware of the effects of ambiguous evidence, it will still polarize you---that's part of what's so insidious about it. (It's also one of the more philosophically controversial parts to the argument, so that's a place where some people will want to disagree!)

Thanks again for the questions! Hope these answers are helpful.

Reply
Romeo
9/6/2020 01:15:36 am

From game theoretic simulations comes a striking result: When holding a certain level of cooperative equilibrium as a constant, tolerance varies as a function of density, or how many other players you expect to encounter in a given time interval. Expecting people living in different densities to display the same tolerance is literally asking them to undermine the cooperative equilibrium on which their social milieu depends.

Reply
Kevin
9/6/2020 03:27:29 am

Sounds fascinating---I haven't heard of this result. Could you send a reference? Would love to learn more!

Reply
Peter Gerdes
9/6/2020 03:54:38 am

I don't know if ambiguous evidence can cause people to rationally polarize. I'll have to wait and see on that one, but one thing that can't be true is that you are always rational and yet expect your future beliefs to change in a particular direction.

I mean, if you knew that you'd be getting evidence tomorrow that will raise your probability of some outcome by such-and-such amount, you'd raise your probability today if you were rational.

It's obviously irrational[1] (in the sense of being vulnerable to a kind of temporal Dutch book) for you to know that your belief will change in a particular way tomorrow. It's more complex, but for the same considerations about Dutch books we have to accept something like the following:


An agent that's rational at every point in time is generally[2] constrained to satisfy the following, where E is the expectation operator and little e is the value you'll report as your expectation at some time in the future.

E[X] = E[e(X)]

Note this implies that P(Z) = E[p(Z)]

But your story clearly violates this requirement.

[1]: I'm assuming that you don't lose information between now and tomorrow. Personally, I'd call getting your memories erased an instance of being irrational (it's updating via something other than conditionalization) even though it's not a normatively blameworthy one, but people can reasonably differ on this point.

[2]: If we allow the possibility that the agent might not exist (i.e. die) in some possibilities, then one has to tweak the claim a little, but in the case you mention we can assume the contribution to the expectation value of possibilities in which the agent dies is negligible.

Reply
Peter Gerdes
9/6/2020 04:04:01 am

I'm open to the idea that certain information can cause people to rationally polarize and even that your exposure to such information can be predictable. But you can't rationally predict that your beliefs will change in a specific direction. That's the only aspect of your story I feel clearly shows irrationality.

Reply
Kolja Keller
9/6/2020 07:56:31 pm

That's a good consideration to think through. I've got about four different ways in mind that might respond to that concern.

(A) I think that result might be more of a glitch in the unrealistic assumption that once evidence is possessed it is never lost than a show of irrationality.

Suppose it is possible to "lose" evidence. And suppose that I predict that I will embark on a biased way to gather evidence but will lose evidence about the bias as I go along. If that is correct, then I can predict my beliefs to gradually rationally shift over time, but I would not yet want to change it, as I currently do have the evidence about the bias that I won't have in the future.

(B): Even without that assumption, you could know a direction of belief change without knowing a precise degree, and that seems like it would be rationally possible. Suppose you know that if you stayed in the Midwest you will have evidence that is sample biased for conservative views, and you know that if you go to the coastal elites, you will have evidence that is biased towards liberal views, but you don't know how strong the bias is, and you know that wherever you go, you won't know how strong the bias is going to be compared to the evidence you would have gotten in the other place.

So you can't predict a precise expected credence, and hence any credence other than what is fitting to the rest of your current evidence seems arbitrary and hence not justified. But once you're down one path, you might know that your evidence has been sampled in a biased way, but you don't know by how much. Hence, adjusting your credences in the direction of the evidence you now have seems better than to just keep it the same.

(C) But, (B) might unrealistically ascribe lacking credences that we do have after all. Perhaps you could derive some probability distribution for the possible degrees of bias you predict to be had on each path, and then do some level-crunching to come up with a likely predicted strength vs bias ratio and adjust your credences accordingly. That is, after you go visit the big college on the coast, you have a glimpse of what kind of evidence you might be getting, but this is very much an uncertain guesswork. Your credence that you're correct about these predictions is fairly low. But if you are adjusting your credences away from what much more straightforward evidence seems to support towards something that has shaky predictions about the strength and sample bias of future evidence, and you're doing the latter on really thin evidence, that also doesn't seem to be a particularly rational way to make the prediction.

(D) But maybe it is rational to do that! And here is a last scenario: Suppose you start off with that hunch of what the evidence you will get might support, and how biased the process of obtaining it will be. But because your evidence about all that is quite weak, it only moves your credence a little. You might still, once you go down that path, come across far better evidence about the sample bias and the strength of the evidence found on that path. As you come across that, it seems possible to update your credence even further upwards. But I don't see how you needed to have been irrational at any step of that process.

Peter Gerdes
9/7/2020 04:59:21 pm

Kolja,

Thanks for the reply. I might have to think a bit about some parts of it but the bit that stands out immediately to me is that every one of your examples seemed to assume a degree of bias at some point in time.

Yes, I certainly agree one can rationally believe right now that your probability of such and such will increase *provided* that you accept that you might be irrational in the future. It’s only under the constraint that one is rational (indeed, I guess it’s technically that you have to know that you will be rational) at all times between now and when your belief is evaluated again that the problem arises.

But that doesn’t change the fact that if you find out that your expected future probability differs from your current probability, that reveals some kind of ‘defect’ in the way you are disposed to process information that one can endeavor to reduce. True, you might not want to just directly adjust your probabilities in response. They might be right and what needs adjustment is your disposition towards bias in the future.

But it still seems to mean that you can’t tell this story about perfectly epistemically blameless actors.

To be clear I think this is just kinda pointless nitpicking. I don’t really care what you choose to call rational but I’m very much interested in the substantive mechanisms which differ from the standard account which you seem to foreshadow here.

Peter Gerdes
9/7/2020 05:14:07 pm

Let me just add, on part B (where I spoke too quickly, as it involves biased evidence but not your bias), that you are still violating rationality there. Informally, I’d put the point this way…you can always compensate for a bias you think will be there and that compensation can be made smaller and smaller. There are two options. Either you eventually can find some compensation factor so small that you aren’t sure which direction (if any) you’ll be biased in after that compensation. But now that method of updating is the rational one and not the uncompensated one.

OTOH you might try and claim that no matter how small you make your compensation any compensation will shift you to be biased in the other direction. But then it’s pretty easy to see that the only way that can be true is if there is no expected bias at all in the first place.

So there must be some way of being inclined to update which leaves the equality of expectations point intact.


More formally, you are no longer presuming the pure Bayesian ideal of a complete prior probability function which is updated by conditionalization. After all, this Bayesian rationality ideal presumes that you begin with a probability distribution over all (possible?…but at least observably distinguishable) states of the world. As such you never just have a bare belief that the evidence in such and such location is biased to some degree. It always comes along with a probability distribution, so you just can’t have those kinds of ‘it’s positive but no idea how much’ answers.

Sure, real people aren’t perfect Bayesians but that’s why being perfectly rational is an idealization we can never achieve.

Kevin
9/8/2020 11:00:06 am

Ha, you hit the nail on the head, Peter. That is *exactly* what this is all about. For it turns out that the sense of "ambiguous evidence" I mean here (higher-order uncertainty; failure of rational credences to be introspective) is (in a sense to be made precise in week 4) necessary and sufficient for failures of that "expectation-matching" principle you mention. In particular, I can be rational to have credence 0.5 in Q, and be rational to have an expectation for my future (rational, no-info-loss) credence in Q that is higher.

The best paper on this so far is, in my opinion, Bernhard Salow's paper on "fishing for compliments" (https://academic.oup.com/mind/article-abstract/127/507/691/3037970). He tries to use this result as a reductio of (in my terminology) "ambiguous evidence", but things get much subtler from here---this is basically what my dissertation was about.

In this paper (https://philpapers.org/rec/DORHU) I reinterpret a result originally from Samet to show that whenever evidence is ambiguous, you'll get synchronic violations of that principle: P(q) ≠ E[P(q)] for some q, where P is the current rational credence function and E is the current rational expectation function. And this paper (https://philpapers.org/rec/DOREAG) shows that there are large classes of models that allow ambiguous evidence in this way (so violate the expectation-matching principle), but still validate the Value of Evidence, in the sense that you always prefer to gather that evidence to make your decision, no matter your decision problem. A consequence of that result is that you can't be Dutch booked. (How? It turns out the Dutch book theorems implicitly assume that you---or at least the bookie, who can't know any more than you---know what your probabilities are before and after the update. But when you have ambiguous evidence, that's precisely when you won't know this.)
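To make the synchronic point concrete, here's one quick toy frame I'm sketching off the cuff (a standard non-partitional example, not the specific model from either paper). Because the evidence you'd have at each state isn't a partition cell, the rational credences aren't introspective, and the expectation-matching equality fails:

```python
# Off-the-cuff toy frame: three states, non-partitional evidence, so the
# rational credences aren't introspective and P(q) ≠ E[P(q)] at state b.
from fractions import Fraction as F

states = ["a", "b", "c"]
prior = {w: F(1, 3) for w in states}

# The evidence you'd have at each state (note: not a partition).
evidence = {"a": {"a", "b"}, "b": {"a", "b", "c"}, "c": {"b", "c"}}

def credence(w, event):
    """Rational credence in `event` at state w: the prior conditioned on
    the evidence you have at w."""
    E = evidence[w]
    return sum(prior[v] for v in event & E) / sum(prior[v] for v in E)

q = {"a"}

P_b_q = credence("b", q)  # rational credence in q at state b: 1/3

# Your expectation, at b, of whatever the rational credence in q is:
E_b_Pq = sum(credence("b", {w}) * credence(w, q) for w in states)
# = 1/3 * 1/2 + 1/3 * 1/3 + 1/3 * 0 = 5/18

print(P_b_q, E_b_Pq)  # 1/3 vs 5/18: the expectation-matching equality fails
```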


That's all pretty compressed and I just saw the time so I have to run, but anyways I just wanted to say that (1) yes, you're completely right that this is the crux of the issue, and (2) the whole project will, in fact, be built around the fact that (I'll argue) rational Bayesians *can* violate the expectation-matching principle you mention, and so can expect that their credences should move in a particular direction upon getting some evidence. (Note: they can't *know* it, since that would indeed violate the value of evidence. But the expectation-matching principle is a stronger constraint, which does fail.)

More in the coming weeks! Will be very curious to hear what you think.

Reply
Peter Gerdes
9/10/2020 03:58:04 pm

That is super interesting and I will stay tuned. I guess I don't really believe there is a fact of the matter about what rationality refers to, so I'm fine if someone wants to define these models as in some sense irrational (even if it's just that they got to the word first). I mean, we already know we aren't logically omniscient, so the real question is which models are more useful in understanding and choosing when to critique individuals, and I'm very much open to the idea that the kind of models you are talking about are superior in that regard for the kind of questions we care about in this context.

So looking forward to the details.

Kolja Keller
9/6/2020 07:32:12 pm

I could hardly agree more. It looks like you're bound and determined to write everything I've been thinking about two years earlier than me. I guess here is one small difference in the theoretical apparatus I would want to use: I think that a top-down 'respecting the evidence' that can allow you to rationally believe things that violate probability theory or first order logic does a lot of the work here. I think that might be the same as the role you're ascribing to 'ambiguous evidence'.

I think in terms of political things, it does come down to a reductionist account of testimonial justification: You have more reason to trust the people around you who have been great testifiers about all sorts of things than some far-off people who you have some reason to suspect of having bad motives.

So you will, rationally, end up trusting more testifiers closer to you, especially on questions of how to interpret evidence. If you've ever gotten into the depths of a data discussion on defensive gun use or what the correct correlation of police shootings to race and such is, it often turns out that the data is at least far murkier than the confident proclaimers of each side make it out to be. And then you get down to fairly deep interpretive questions of what the correct background rate is, etc. But since most people don't have the access, time, or skills to re-read all the published papers, or to evaluate possible publication bias, it becomes very easy to just trust the "experts" on your side that supposedly looked at the direct evidence.

Reply
Kevin
9/9/2020 06:47:41 am

Glad to hear we're on the same wavelength! Sorry to be jumping the gun :).

I suspect the apparatus we have in mind is different---all my models are going to be Bayesian, so there won't be any prob-theory or logic violations. I definitely think you're right that it's worth questioning those things as well, but I'm going to try to be as conservative as possible about them for the sake of argument.

I totally agree about the murkiness of the data, and the way our peer group affects our views of it! Part of what I'll be trying to do is give an account of the (Bayesian) mechanisms at play in those processes. Looking forward to hearing what you think of the proposals!

Reply
Adam Gibbons
9/9/2020 03:24:49 pm

Fascinating post, Kevin.

One thing that puzzles me, though, is the following claim:

"given that we can’t believe that our own beliefs are irrational, the irrationalist explanation of polarization falls apart."

I don't see how this follows. It may indeed be true that we can't coherently hold that our own beliefs are irrational. But I would have thought that a natural claim to be made here by defenders of the irrationalist explanation is that people tend to falsely (and often *irrationally*) believe that their own beliefs are NOT irrational. In a predictably partisan and self-serving way, they believe that their political opponents are deeply irrational but that they themselves are not. The irrationalist explanation of the relevant phenomena need not involve people believing that their beliefs are irrational.

Reply
Kevin
9/11/2020 10:51:48 am

Thanks for the question, Adam! I agree this is a natural line for the irrationalist story to take, and certainly there are people who will say that. The point I want to press is that anyone who really internalizes the irrationalist story isn't going to be able to think this *while* holding on to their own political beliefs. If they know the irrationalist story well, then they know that the mechanisms it describes apply equally to both sides of the political spectrum.

Suppose they hold onto their political beliefs. They either (1) think that those mechanisms explain *their own* political beliefs, or (2) they don't. If (1), then they have to think their own political beliefs are irrational---so, since they hold onto them, they're stuck in the position of having the akratic "Trump's a good (bad) president, but I'm irrational to believe that" state. If (2), then since they've internalized the irrationalist story, they'll believe that it's irrational to think that their own political beliefs were formed in some separate way---so again, they'll find themselves in the akratic state, this time, "my political beliefs are rational, but I'm irrational to believe *that*".

So the claim is that if you thoroughly buy the irrationalist story, then you can't hold onto your own political beliefs. But since we all DO hold onto our political beliefs in these contexts, we can't wholeheartedly believe the irrationalist story.

Instead, I want to offer a story where we CAN hang on to our political beliefs, and still wholeheartedly believe the story.

Is that clarifying/convincing, or are you still worried?

Reply
Nick Perry
9/9/2020 03:48:02 pm

Not a philosophical comment at this stage: but are you aware that Scott Adams - author of Dilbert cartoons - writes interestingly on how, in the (non-science) public sphere, notionally rational human beings routinely process apparently the same data to come up with polarised views?
"Smart, well-informed people disagree on nearly all major issues. So being smart and well-informed doesn’t help you grasp reality as much as you would hope. If it did, all of the smart, well-informed people would agree. They don’t ... If your view of the world is that people use reason for their important decisions, you are setting yourself up for a life of frustration and confusion. You’ll find yourself continually debating people and never winning except in your own mind. Few things are as destructive and limiting as a worldview that assumes people are mostly rational."

Reply
Kevin
9/11/2020 10:58:15 am

Cool, I didn't know about Adams's post on this. Thanks for sharing!

In one sense I completely agree, but I want to twist the ending. Much of this project will be about showing how rational processes---ones that people undergoing them can reasonably expect to lead to the truth---don't necessarily lead to correct or converging opinions. So what I'd want to do is swap out "smart, well-informed" with "rational" in the above quote.

When you make that change, the conclusion is very different! For the fact that "smart, well-informed" people disagree is not evidence that they're irrational; it's evidence that the issues they're grappling with are HARD and the evidence they have is intractable, noisy, and ambiguous. On that way of seeing things, the mistake is not in assuming that most people are *rational*, it's in assuming that them being rational means their opinions are *right*.

TLDR: The fact that people are often wrong and they often disagree doesn't show that they're irrational---it shows that getting things right is hard, even for rational people.

Reply
Enrique Guerra-Pujol link
9/9/2020 04:03:19 pm

This project sounds very promising, and I am especially intrigued about the nature of "ambiguity" and the Bayesian nature of this fascinating project--specifically, the relation between our political and moral preferences and our credences or "degrees of belief" in political and moral propositions.

Reply
Kevin
9/11/2020 11:02:53 am

Thank you! Yes, there's going to be a lot of subtlety involved at times with translating between Bayesian models of attitudes and our moral/political beliefs and values. This'll probably come up a bit more in the appendix than taking center-stage in the blog, in part because the models I'm using let us slot in claims with any content (whether it's "Trump will win the election" or "It'd be good if Trump won the election") as the things people will polarize on. But do let me know if at certain stages it feels especially relevant!

Reply
Gerald Yeagar
9/9/2020 05:35:56 pm

Do you ever cite political scientists? There is a vast literature about this very topic in the field that actually studies politics.

Reply
Kevin
9/11/2020 11:09:36 am

Of course! See the current version of the technical appendix (https://www.kevindorst.com/reasonably-polarized-technical-appendix.html) for a start---but there'll be many more to come, especially in the empirical sections.

(Sorry if you tried to look at the TA earlier---there was a compiling error and many of the references were showing up as '?'. Thanks for prompting me to realize that!)

Reply
JRS
9/9/2020 06:20:36 pm

"People polarize because they look at information that confirms their beliefs and talk to people that are like-minded. But they do these things not because they are... biased."

Are these sentences saying that people look at information that confirms their beliefs not because they are biased? That sounds like the claim that people don't exhibit confirmation bias. I assume that's not what you're saying. But if you do think people exhibit confirmation biases, I don't think I get how you distinguish your position from irrationalism.

Reply
Kevin
9/14/2020 10:24:19 am

Great question! More on it in later posts, in part because the answer is subtle. I think it depends on a terminological choice for what we mean by "confirmation bias", and whether exhibiting it implies irrationality.

If confirmation bias implies irrationality, I'd say that confirmation bias is much rarer than we think, and the studies claiming to demonstrate it often do no such thing. If it doesn't, then I'd say confirmation bias *is* common---but commonly rational.

Does that help?

Reply
JRS
9/14/2020 12:06:54 pm

Thanks for the response. And yes, this helps.

I would be inclined to treat the phrase 'confirmation bias' as a theoretical term defined by its use in research: what observational predictions does it help psychologists make? How is it connected to other theoretical concepts of psychology? I suspect that this would give us something like: confirmation bias is the tendency to seek out evidence that confirms our beliefs, *because* it confirms our beliefs.

Does that count as implying irrationality? Or were you thinking of something like literally building the term 'irrationality' into the definition?

So I'm now reading your disagreement with the irrationalist as a disagreement about something like how much variation in polarization is explained by different factors. You say Bayesian updating on ambiguous information explains way more than does confirmation bias of the kind I described. The irrationalist says confirmation bias of the kind I described explains a lot more of the polarization we see.

I was reading the passage I quoted from your post as saying people don't exhibit irrational confirmation bias (at all). (Because, for example, there's some issue with the very concept of confirmation bias.) I'm not an expert, but I would think that this view would be hard to square with the entirety of psychological research on this topic. By contrast, there does seem to be a debate about the extent of confirmation bias in the psychological literature.

Kevin
9/16/2020 08:42:20 am

Yeah definitely! Jess Whittlestone's recent dissertation (http://wrap.warwick.ac.uk/95233/1/WRAP_Theses_Whittlestone_2017.pdf) is the best thing I've seen on this (here's a blog post where she summarizes it quickly [https://jesswhittlestone.com/blog/2018/1/10/reflections-on-confirmation-bias]---definitely worth a read). On her account, there's a fair amount of unclarity on what exactly confirmation bias is, even within the relevant literature, so I suspect it's hard to get a clear and uncontroversial definition.

But yours seems like as good a stab as any! Whether it counts as irrational on that definition probably will depend on how exactly we see the causality running and how fine-grained we want to get about our judgments of causation. On the picture I'll give, the fact that you have certain beliefs (or: evidence for those beliefs) can make it so that the evidence you get in favor of those beliefs is less ambiguous than the evidence you get against them, which in turn can cause you to (rationally) raise your confidence in those beliefs. Does that count as "confirming your beliefs because it confirms your beliefs"? In one sense, no---it's confirming your beliefs because of the asymmetry in ambiguity. But in another sense, yes, since it's the fact that you had those beliefs which caused the asymmetry in ambiguity. Does that make sense?

That's why I'm inclined to say things are really subtle here, and I'm not entirely sure what the best way to respond to your initial question is.

But overall, I would definitely want to go the way you suggest: irrational confirmation bias is surely a thing (there's nothing incoherent about it, and there's empirical evidence for it), but it's much LESS of a factor than we commonly think, since asymmetries in ambiguity play a large role in explaining the empirical evidence that's usually taken to support confirmation bias.

How's that sound?

Adrian Bardon link
9/9/2020 07:33:23 pm

Looks good, Kevin. If you are looking for relevant work in social science I have a whole book on the subject, but you could start with the links in this article: https://theconversation.com/coronavirus-responses-highlight-how-humans-are-hardwired-to-dismiss-facts-that-dont-fit-their-worldview-141335

Reply
Kevin
9/14/2020 10:25:56 am

Fantastic, thanks Adrian! Will check out the article, and add the book (this one? https://global.oup.com/academic/product/the-truth-about-denial-9780190062279?cc=gb&lang=en&) to my reading list!

Reply
Guy Crain
9/10/2020 11:16:51 am

I'm curious to see what you say about specific heuristics. Take motivated reasoning for example.

(1) I take it that the phenomenon is defined by arguing/reasoning in a way that tracks preferred beliefs but not necessarily true ones. Do you aim to tinker with the definition of "rationality" to say that sometimes failing to aim for accuracy is rational?

(2) What do you think of Ditto and Lopez Motivated Skepticism paper? In Study 2, they seem to find evidence of motivated reasoning about personal health, but they admit that there's an alternative Bayesian explanation for people's behavior--base rates of sickness are far lower than health. Then in Study 3, they alter the experimental design to show that people really were chasing their preferences and not just being intuitive Bayesians. Part of the reason I bring this up is because it seems that even if a lot of what cog psy has labelled biases or heuristics could be explained by Bayesian moves, that's not the same as showing that the Bayesian moves *are* the actual moves people are making, no?

Reply
Kevin
9/16/2020 08:57:46 am

Thanks for the questions! I hadn't read the Ditto and Lopez paper, so thanks for adding that one to my list. Skimming it, it *looks* like the general arguments I'm going to give will apply to it as well, but I will of course have to look at the details later.

I'm not going to try to divorce rationality from accuracy-conducive processing, but there are a couple interesting possibilities that open up once ambiguous evidence is on the table.

The first, and most straightforward, is that often it's going to maximize expected accuracy to do what I'll call *confirmatory searches*––cognitive searches for bits of information confirming our prior beliefs, rather than those for bits of information disconfirming them. I'm thinking this will apply to Ditto and Lopez's study insofar as people have a prior that they are in good health, and so it'll maximize expected accuracy to scrutinize negative test results more than positive test results. I'm not 100% sure at this point what to say if it turns out that people do this even when their prior was that they would get a negative result, but they prefer to get a positive one. (Do you know if the Ditto study (or another one) cleanly separates those issues?) But I'm inclined to think that in at least some of these cases, the same mechanisms will kick in.

The second point is that people will often be faced with a choice between two routes of cognitive processing, one of which they expect to raise their confidence in P, the other of which they expect to lower it. In the sorts of situations I'm going to be introducing, BOTH strategies are expected to make their beliefs about P (and everyone else) more accurate. Now suppose, because they'd be happier if they believed P, they decide to pursue the former strategy. Does that make them irrational? They are not increasing their confidence in P *at the cost* of being accurate---they rationally expect it'll make them more accurate, even though they (also) rationally expect (but do not know) it'll make them more confident. I'll make the argument that in such a case they're still being rational, even though non-epistemic, motivational factors are influencing their belief-forming mechanism.

More on this soon! I think the motivated reasoning stuff is some of the most interesting and challenging for the ambiguous-evidence picture, so will be curious to hear what you think when the post on it comes along.

Reply
maxim smyrnyi
9/11/2020 08:55:51 am

I wanted to add the following bits of evidence on the "irrationality" side. General idea being that things like "systemic racism" are extremely speculative, currently barely testable hypotheses which nonetheless play a huge role in some people's *moral* (not necessarily scientific) economy. They are candidates for "best explanation" for their experiences or experiences they are told other people are having. Just like moral convictions can be fostered by selective exposure to data (by biasing the data you receive), support for the empirically sounding theories of blame can be boosted by propaganda campaigns. "Rational path dependency" here is "rational exposure dependency". Just like marketing can have effects on those who watch TV more than on those who don't, propaganda has stronger effects on those who are more exposed. By choosing where you live and what jobs you hope to get, you change the patterns of exposure. It is important that most of the "big" beliefs are ultimately untestable: they are under-determined by data projections into the future. For me that means that no, you cannot settle who is right by just looking at the data; the fact that people fail to do that is rational and to be expected.

At any rate, Zach Goldberg documented some of the ways in which polarization was manufactured with the help of major media corporations:

How the Media Led the Great Racial Awakening
Years before Trump’s election the media dramatically increased coverage of racism and embraced new theories of racial consciousness that set the stage for the latest unrest

https://www.tabletmag.com/sections/news/articles/media-great-racial-awakening


America’s White Saviors
White liberals are leading a ‘woke’ revolution that is transforming American politics and making Democrats increasingly uneasy with Jewish political power
https://www.tabletmag.com/sections/news/articles/americas-white-saviors

Democrats Are Turning Immigration Into a Moral Ultimatum
Why have attitudes on immigration gotten so much more radical in recent years? Hint: It’s not just Trump.

https://www.tabletmag.com/sections/news/articles/democrats-immigration-moral-ultimatum

Reply
Kevin
10/3/2020 12:39:06 pm

Why do you think these claims are evidence of irrationality? Though I'd disagree with you on the details of particular campaigns, I totally agree that the media manufactures polarization through the way it presents evidence. The argument will be that the way such manufactured polarization works is by making it *rational* for individuals to polarize—e.g. liberals see clear evidence in favor of liberal views and ambiguous evidence against them; vice versa for conservatives.

Reply
Tiago Santos link
9/11/2020 11:43:20 am

Great post. I do wonder how you would respond to the view of Bryan Caplan, that most of the issues you and Becca would drift apart on are subject to rational irrationality - that is, it is perfectly rational for both of you to conform to your social environment, ignoring the potential irrationality of your beliefs. You may argue that it would be irrational to think that your beliefs are correct _and_ that in general political beliefs are irrationally determined by the person's environment. I would respond that while irrational thoughts cannot be true, people are still perfectly capable of holding them...
Congratulations on the project.

Reply
Kevin
10/3/2020 10:28:25 am

Thanks Tiago! Sorry to be slow on the reply here. It's a good question, and one that I just tried to address in the beginning of a post from last week. (https://www.kevindorst.com/stranger_apologies/what-is-rational-polarization)

I think I would understand Caplan's view as the claim that polarization can result from *practically* rational causes—that it can be in your interest (say, to get approval from your friends) to have certain political beliefs and so you adopt them. I think that's definitely a possibility, and one where it's not hard to see how it would go. But the argument I'm trying to make is that the beliefs are, in the main, *epistemically* rational in the sense that they are warranted by people's evidence.

In my reply to Rafal's comment on that post I linked, I tried to say why it's epistemic rationality I think we should care about. In short, the claim is that whether we think the other side is epistemically rational to believe as they do is what determines whether we demonize them for holding their beliefs.

Reply
James Abraham
9/12/2020 10:01:04 am

This stuff is embarrassing. It's like someone publishing a physics theory which gives the wrong equations, predicts that an electron is the size of a star, gives no juice for the squeeze.

Your theory gives the wrong predictions about the very phenomena that it seeks to explain.

Trump supporters are not rational actors unless their goal is one of annihilating all moral values -- values both traditional and humanist. His tweets show a narcissistic maniac who is as unpresidential as the Joker. He behaves, quite literally, like a child bully. He is not just a pathological liar but lazy, ignorant, easily distracted. In no way can his supporters be taken as "reasonable people doing the best they can with the information they have".

Your argument against the irrationalist view is superficial.

For one thing, people are entirely capable of being irrational even after admitting that they might be so.

How does it work? Because people compartmentalise. They become indifferent to the truth, losing themselves in the fantasy out of indulgence or expedience. They forget their initial objections or develop ingenious arguments to muddy the water enough for their initial objections to be less embarrassing.

So the irrationalist model is quite valid. The people who write about it are usually liberals, but there is no inconsistency. Plenty of liberals criticise the excesses on their own side.

You don't find that degree of self-criticism among Trump's followers, any more than you found it among those of any tinpot dictator.

Reply
Kevin
10/3/2020 10:33:41 am

Thanks for the objections! I think you might be latching on to the wrong piece of my argument that we shouldn't blame polarization on irrationality. The premise is that YOU can't think that YOUR beliefs about Trump are irrational. I'm sure you don't, given the strength of your convictions. That's the philosophical premise, and it's one I take it you agree with.

The next premise is that people on both sides of politics form their political beliefs through the same mechanisms. This isn't a philosophical premise—it's a robustly confirmed psychological result.

The first premise establishes that you think your belief about Trump is rational. The second one establishes that your belief about Trump is rational if and only if the other side's (opposing) belief about Trump is rational—after all, you both formed them through the same mechanisms. It follows from these two that you should think that their beliefs are rational, too.

I'm guessing you'll object to the psychological premise that both sides form their beliefs through the same mechanisms. Obviously we could go on about this for a long time, but I've tried to give a more detailed version of the argument (with more citations) here: https://arcdigital.media/the-other-side-is-more-rational-than-you-think-2137349204c7

Reply
Nas link
10/18/2020 03:11:42 pm

Hi Kevin,

I really appreciate your perspective on the issue of polarization. It's an area I'm deeply concerned about. I sent you a private email but it looks like you are active in the comments here, so I'm just placing a note to check your email for something that might be relevant to your research. Thanks,
- Nas

Reply
Martina Calderisi
4/15/2021 05:41:13 pm

Dear Professor Dorst,

At some point, you have claimed that (I quote) "It's incoherent to believe that your own beliefs are irrational", by making reference to the so-called Non-Akrasia Constraint (NAC).

I would like to know whether you support some sort of wide-scope NAC, according to which (roughly): (for all p and for all a), R[Ba(p) -> non-Ba(non-RBa(p))], where p stands for "proposition", a stands for "agent", R stands for "it is rational that", Ba stands for "a believes that", -> stands for "if...then..."; or you support some sort of narrow-scope NAC, according to which (roughly): (for all p and for all a), Ba(p) -> non-RBa(non-RBa(p)).

I heartily thank you in advance for your answer.

(I have just sent you an e-mail containing the very same question! Thank you again.)

Reply
Kevin
4/17/2021 09:37:41 am

Email sent! Thanks a bunch for your question; curious to see the paper when it's ready!

Reply
Joshua
9/26/2022 10:00:47 am

Kevin -

I want to look at some other aspects of your post - but just quickly wanted to say that the view that polarization is exploding may be worthy of some investigation (if you haven't already investigated).

Is polarization greater than in the hard hats versus hippies era? Than when large numbers took to the streets to fight for unions against company goons? During McCarthyism?

It certainly feels like polarization has exploded, but that could be for at least two reasons that don't exactly translate to increased polarization. The first (as Ezra Klein argues) is that polarization, comparatively speaking, may be broken down more uniformly along party lines, which means that the polarization manifests as greater political dysfunction within our government - making the polarization more concrete in its effect. We could maybe see this in how there used to be more diversity within political parties. The second would be that social media makes the polarization more visible, particularly to those who participate in social media (and even more so to those who participate in the more polarized segments of social media).

Polls show that partisan antipathy has increased, but that doesn't necessarily translate to increased polarization in some absolute sense.

In some ways, this is all orthogonal to your main thesis (which I hope to write another comment on), but to the extent that it might be directly relevant, I think the basic question of how to define polarization, and thus how to measure it, could be an important step in interrogating your thesis.

Reply



