KEVIN DORST

Stranger Apologies

What is "Rational" Polarization?

9/25/2020


 
(2300 words, 12-minute read.)

So far, I've (1) argued that we need a rational explanation of polarization, (2) described an experiment showing how in principle we could give one, and (3) suggested that this explanation can be applied to the psychological mechanisms that drive polarization.

Over the next two weeks, I'll put these normative claims on a firm theoretical foundation. Today I'll explain why ambiguous evidence is both necessary and sufficient for predictable polarization to be rational. Next week I'll use this theory to explain our experimental results and show how predictable, profound, persistent polarization can emerge from rational processes.

With those theoretical tools in place, we'll be in a position to use them to explain the psychological mechanisms that in fact drive polarization.

​So: what do I mean by "rational" polarization; and why is "ambiguous" evidence the key?

It’s standard to distinguish practical from epistemic rationality. Practical rationality is doing the best that you can to fulfill your goals, given the options available to you. Epistemic rationality is doing the best that you can to believe the truth, given the evidence available to you.

It’s practically rational to believe that climate change is a hoax if you know that doing otherwise will lead you to be ostracized by your friends and family. It’s not epistemically rational to do so unless your evidence—including the opinions of those you trust—makes it likely that climate change is a hoax.

My claim is about epistemic rationality, not practical rationality. Given how important our political beliefs are to our social identities, it’s not surprising that it’s in our interest to have liberal beliefs if our friends are liberal, and to have conservative beliefs if our friends are conservative. Thus it should be uncontroversial that the mechanisms that drive polarization can be practically rational—as people like Ezra Klein and Dan Kahan claim.

The more surprising claim I want to defend is that ambiguities in political evidence make it so that liberals and conservatives who are doing the best they can to believe the truth will tend to become more confident in their opposing beliefs.

To defend this claim, we need a concrete theory of epistemic rationality.

The Standard Theory

The standard theory is what we can call unambiguous Bayesianism. It says that the rational degrees of confidence at a time can always be represented with a single probability distribution, and that new evidence is always unambiguous, in the sense that you can always know exactly how confident to be in light of that evidence. 
Simple example: suppose there’s a fair lottery with 10 tickets. You hold 3 of them, Beth holds 2, and Charlie holds 5.  Given that information, how confident should you be in the various outcomes? That's easy: you should be 30% confident you’ll win, 20% confident Beth will, and 50% confident Charlie will.

Now suppose I give you some unambiguous evidence: I tell you whether or not Charlie won. Again, you’ll know exactly what to do with this information: if I tell you he won, you know you should be 100% confident he won; if I tell you he lost, that means there are 5 tickets remaining, 3 of which belong to you—so you should be 3/5 = 60% confident that you won and 40% confident that Beth did.
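(If it helps to see the arithmetic, here is that update as a few lines of Python—a toy sketch of my own, not anything from the Technical Appendix.)

    # Unambiguous Bayesian updating in the 10-ticket lottery.
    prior = {"you": 0.3, "beth": 0.2, "charlie": 0.5}

    def condition(dist, allowed):
        """Bayesian conditioning: zero out the ruled-out outcomes, then renormalize."""
        total = sum(p for outcome, p in dist.items() if outcome in allowed)
        return {outcome: (p / total if outcome in allowed else 0.0)
                for outcome, p in dist.items()}

    # Learning the unambiguous evidence "Charlie lost":
    print(condition(prior, {"you", "beth"}))   # {'you': 0.6, 'beth': 0.4, 'charlie': 0.0}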

In effect, unambiguous Bayesianism assimilates every case of information-gain to a situation like our lottery, wherein you always know what probabilities to have both before and after the evidence comes in.

This has a surprising consequence:
Fact 1. Unambiguous Bayesianism implies that, no matter what evidence you might get, predictable polarization is always irrational.  
(​The Technical Appendix contains all formal statements and proofs.)

In particular, consider me back in 2010, thinking about the political attitudes I’d have in 2020. 
Unambiguous Bayesianism implies that no matter what evidence I might get—even the knowledge that I was heading off to a liberal university—I shouldn’t have expected it to be rational for me to become any more liberal than I was then.

Moreover, Fact 1 also implies that if Becca and I shared opinions in 2010, then we couldn't have expected rational forces to lead me to become more liberal than her.

Why is Fact 1 true—and what does it mean?

Why it’s true: Return to the simple lottery case. Suppose you are only allowed to ask questions to which you know I’ll give a clear answer. You’re currently 30% confident that you won. Is there anything you can ask me that you’d expect to make you more confident of this? No.

You could ask me, “Did I win?”—but although there’s a 30% chance I’ll say ‘Yes’, and your confidence will jump to 100%, there’s a 70% chance I’ll say ‘No’ and it’ll drop to 0%.  Notice that (0.3)(1) + (0.7)(0) = 30%.

You could instead ask me something that’s more likely to give you confirming evidence, such as “Did Beth or I win?”  In that case it’s 50% likely that I’ll say ‘Yes’—but if I do your confidence will only jump to 60% (since there’ll still be a 40% chance that Beth won); and if I say ‘No’, your confidence will drop to 0%. And again, (0.5)(0.6) + (0.5)(0) = 30%.
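(And here’s a quick numerical check—again just a sketch—that both questions leave your expected confidence exactly where it started.)

    # Expected value of your posterior confidence that *you* won, for each question.
    prior_you = 0.3

    # "Did I win?": 'Yes' with prob 0.3 (posterior 1.0), 'No' with prob 0.7 (posterior 0.0).
    q1 = 0.3 * 1.0 + 0.7 * 0.0

    # "Did Beth or I win?": 'Yes' with prob 0.5 (posterior 0.6), 'No' with prob 0.5 (posterior 0.0).
    q2 = 0.5 * 0.6 + 0.5 * 0.0

    print(prior_you, q1, q2)   # 0.3 0.3 0.3 — both expected posteriors equal the prior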

This is no coincidence. Fact 1 implies that if you can only ask questions with unambiguous answers, there’s no question you can ask that you can expect to make you more confident that you won. And recall: unambiguous Bayesianism assimilates every scenario to one like this.

What it means: Fact 1 implies that if unambiguous Bayesianism is the right theory of epistemic rationality, then the polarization we observe in politics must be irrational.  

After all, a core feature of this polarization is that it is possible to see it coming. When my friend Becca and I went our separate ways in 2010, I expected that her opinions would get more conservative, and mine would get more liberal. Unambiguous Bayesianism implies, therefore, that I must chalk such predictable polarization up to irrationality.

But, as I’ve argued, there’s strong reason to think I can’t chalk it up to irrationality—for if I’m to hold onto my political beliefs now, I can’t think they were formed irrationally.

This—now stated more precisely—is the puzzle of predictable polarization with which I began this series.
(1300 words left)

Ambiguous Evidence

The solution is ambiguous evidence.

Evidence is ambiguous when it doesn’t wear its verdicts on its sleeve—when even rational people should be unsure how to react to it. Precisely: your evidence is ambiguous if, in light of it, you should be unsure how confident to be in some claim.  

(More precisely: letting P be the rational probabilities to have given your evidence, there is some claim q such that P(P(q)=t)<1, for all t. See the Technical Appendix.)

Here is the key result driving this project:
Fact 2. Whenever evidence is ambiguous, there is a claim on which it can be predictably polarizing.
In other words, someone who receives ambiguous evidence can expect it to be rational to increase their confidence in some claim. Therefore, if two people will receive ambiguous evidence, it's possible for them to expect that their beliefs will diverge in a particular direction.
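(To make this concrete, here is a toy model of ambiguous evidence. It’s my own illustrative construction—an “unmarked clock”-style example—not one of the models from the experiment or the Technical Appendix, and I’m not claiming it also satisfies the value condition introduced below; it just shows how ambiguity opens the door to predictable shifts.)

    # Three worlds with a uniform prior. The evidence you'd receive at each world:
    #   at world 1 you learn {1,2}; at world 2 you learn {1,2,3}; at world 3 you learn {2,3}.
    # This evidence is ambiguous: wherever you are, your evidence leaves open worlds
    # at which the rational credence differs, so you can't be sure what it is.
    prior = {1: 1/3, 2: 1/3, 3: 1/3}
    evidence = {1: {1, 2}, 2: {1, 2, 3}, 3: {2, 3}}

    def rational_credence_in_2(world):
        """Rational credence that the true world is 2, given the evidence at `world`."""
        e = evidence[world]
        return prior[2] / sum(prior[w] for w in e)

    expected_posterior = sum(prior[w] * rational_credence_in_2(w) for w in prior)
    print(round(prior[2], 3), round(expected_posterior, 3))   # 0.333 vs 0.444: a predictable rise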

As we saw in our experiment—and as I’ll explain in more depth next week—this means that ambiguous evidence can lead to predictable, rational shifts in your beliefs.

Without going into the formal argument, this is something that I think we all grasp, intuitively. Consider an activity like asking a friend for encouragement. For example, suppose that instead of a lottery, you were competing with Charlie and Beth for a job. I don’t know who will get the offer, but you’re nervous and come to me seeking reassurance. What will I do?

I’ll provide you with reasons to think you’ll get it—helping you focus on how well your interview went, how qualified you are, etc.

Of course, when you go to me seeking reassurance you know that I’m going to encourage you in this way. So the mere fact that I’m giving you such reasons isn’t, in itself, evidence that you got the job.  Nevertheless, we go to our friends for encouragement in this way because we do tend to feel more confident afterwards. Why is that?

If I’m a good encourager, then I’ll do my best to make the evidence in favor of you getting the position clear and unambiguous, while making the evidence against it unclear and ambiguous. I’ll say, “They were really excited about you in the interview, right?”—highlighting unambiguous evidence that you’ll get the job. And when you worry, “But one of the interviewers looked unhappy throughout it”, I’ll say, “Bill? I hear he’s always grumpy, so it’s probably got nothing to do with you”—thus making the evidence that you didn’t get the job more ambiguous, and so weaker. On the whole, this back-and-forth can be expected to make you more confident that you’ll get the job.

This informal account is sketchy, but I hope you can see the rough outlines of how this story will go. We’ll return to filling in the details in due course.

But before we do that, there’s a more fundamental question we need to ask.

I’ve introduced the notion of ambiguous evidence and proved a result connecting it to polarization. But how do we know that the models of ambiguous evidence which allow for predictable polarization are good models of (epistemic) rationality? Unambiguous Bayesianism has a distinguished pedigree as a model of rational belief; how do we know that allowing ambiguous evidence isn’t just a way of distorting it into an irrational model?
(700 words left)

The Value of Rationality

That question can be given a precise answer in terms of the value of evidence.

What distinguishes rational from irrational transitions in belief? It’s rational to update your beliefs about the lottery by asking me a question about who won. It’s irrational to update your beliefs by hypnotizing yourself to believe you won. Why the difference?

Answer: asking me a question is valuable, in the sense that you can expect it to make your beliefs more accurate, and therefore to improve the quality of your decisions. Conversely, hypnotizing yourself is not valuable in this sense: if right now you’re 30% confident you won, you don’t expect that hypnotizing yourself to become 100% confident will make your opinions more accurate—rather, it’ll just make you certain of something that’s likely false!

This idea can be made formally precise using tools from decision theory. Say that a transition in beliefs is valuable if, no matter what decision you face, you prefer to make the transition before making your decision, rather than simply making your decision now.  (See the Technical Appendix for details.)

To illustrate, focus on our simple lottery case. Suppose you’re offered the following:
Bet: If Charlie wins the lottery, you gain 5 dollars; if not, you lose 1 dollar.
Since Charlie is 50% likely to win, this is a bet in your favor.

Would you rather (1) decide whether to take the Bet now, or (2) first ask me a question about who won, and then decide whether to take the Bet?  

Obviously the latter. If you must decide now, you’ll take the Bet—with some chance of gaining 5 dollars, and some chance of losing 1 dollar. But what if instead you first ask, “Did Charlie win?”, before making your decision? Then if I say ‘Yes’, you’ll take the Bet and walk away with 5 dollars; and if I say ‘No’, you leave the Bet and avoid losing 1 dollar.

In short: asking a question allows you to keep the benefit and reduce the risk. That's why asking questions is epistemically rational.
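(In numbers—a toy sketch of my own, with payoffs in dollars:)

    # Bet: +5 if Charlie wins (probability 0.5), -1 otherwise.
    decide_now = 0.5 * 5 + 0.5 * (-1)   # take the Bet immediately: expected payoff 2.0
    ask_first  = 0.5 * 5 + 0.5 * 0      # ask "Did Charlie win?", take the Bet only on 'Yes': 2.5
    print(decide_now, ask_first)        # 2.0 2.5 — asking first is worth more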


Contrast questions with hypnosis. Would you rather (1) decide whether to take the Bet now, or (2) first hypnotize yourself to believe that you won, and then decide whether to take the Bet?  

Obviously you’d rather not hypnotize yourself. After all, if you don’t hypnotize yourself, you’ll take the Bet—and it’s a bet in your favor.  If you do hypnotize yourself, the Bet will still be in your favor (it’s still 50% likely that Charlie will win); but, since you’ve hypnotized yourself to think that you won (so Charlie lost), you won’t take the bet—losing out on a good opportunity.
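(Continuing the same toy sketch: the hypnotized version of you declines a bet that, by your current lights, is still worth taking.)

    # Expected payoffs of the three policies, by your *current* 50% credence that Charlie wins.
    decide_now      = 0.5 * 5 + 0.5 * (-1)   # 2.0
    ask_first       = 0.5 * 5 + 0.5 * 0      # 2.5
    hypnotize_first = 0.0                    # certain you won, you decline the favorable Bet
    print(hypnotize_first < decide_now < ask_first)   # True: the question is valuable, the hypnosis isn't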

All of this can be generalized and formalized into a theory of epistemic rationality:
Rationality as Value: epistemically rational belief-transitions are those that are valuable, in the sense that you should always expect them to lead to better decisions.
There are two core facts this theory gives us.

First: if we assume that evidence is unambiguous, then Rationality as Value implies unambiguous Bayesianism:
Fact 3. Rationality as Value implies that, when evidence is unambiguous, unambiguous Bayesianism is the right theory of epistemic rationality.
Thus our theory of epistemic rationality subsumes the standard theory as a special case. In particular, it implies that when evidence is unambiguous, predictable polarization is irrational.

Second: once we allow ambiguous evidence, predictable polarization can be rational:
Fact 4. There are belief-transitions that are valuable but contain ambiguous evidence—and which, therefore, are predictably polarizing.
Fact 4 is the foundation for the theory of rational polarization that I’m putting forward. It provides a theoretical “possibility proof” to complement our empirical one: it shows that, when evidence is ambiguous, you can rationally expect it to lead you to the truth, despite expecting it to polarize you. This is how we are going to solve the puzzle of predictable polarization.

In particular, it turns out that we can string together a series of independent questions and pieces of (ambiguous) evidence with the following features (see the sketch after this list):
  • Relative to each question, the evidence you’ll receive is valuable and yet (slightly) predictably polarizing;
  • Yet relative to the collection of questions as a whole, the evidence is predictably and profoundly polarizing.
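Here is a rough simulation of that compounding. It reuses my toy three-world model from above (so, again, it illustrates the shape of the claim, not the specific models in the Technical Appendix):

    import random

    # For each of 100 independent questions, you receive ambiguous evidence from the
    # three-world model. Each individual shift toward "world 2" is modest and variable,
    # but the average credence across the whole collection is almost guaranteed to land
    # well above the 1/3 prior.
    random.seed(0)
    evidence = {1: {1, 2}, 2: {1, 2, 3}, 3: {2, 3}}

    def posterior_in_2(true_world):
        return (1/3) / sum(1/3 for _ in evidence[true_world])

    posteriors = [posterior_in_2(random.randint(1, 3)) for _ in range(100)]
    print(round(sum(posteriors) / len(posteriors), 3))   # roughly 0.44, vs. the 0.333 prior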

This, I’ll argue, is how predictable, persistent, and profound polarization can be rational—how, back in 2010, Becca and I could predict that we'd come to disagree radically without predicting that either of us would be systematically irrational.  

What next?
The formal details and proofs of Facts 1 – 4 can be found in the Technical Appendix.

​If you liked this post, consider signing up for the newsletter, following me on Twitter, or spreading the word.
Next Post: an argument that the predictable polarization observed in our word-completion experiment was rational, and an explanation of how predictable, profound, persistent polarization can arise rationally from ambiguous evidence.
10 Comments
Rafal
9/27/2020 04:32:52 pm

You: “Sure, people appear to be characterized by practical rationality. But what if we assume that they are characterized by epistemic rationality instead? Well, then we run into a puzzle: logic and evidence suggest that they are not rational this way. But wait, if we also assume ambiguity, the puzzle is solved!”

Me (wielding Occam’s razor the size of a bastard sword): “So how about just settling on practical rationality?”

More seriously: practical rationality seems very “rational” to me because it helps me get the things I want (like feeling good among my friends, for example, or rationalizing my nasty attitude towards philosophers - well, “they are all liars or that’s what fox news said anyway”). So I feel the burden of proof that people “are doing the best they can to believe the truth” falls squarely on you. Sure, I believe that such people exist (there are already two of us), but what makes you believe many (the majority of) people are equally irrational to be epistemically rational? Because this is what you need to argue first to prove that the theory you develop is empirically relevant.

And just to be sure: While I don’t yet grasp all the implications of ambiguity, I intuitively feel that your propositions (“facts”) may well ‘mathematically’ hold (meaning that they are internally consistent given assumptions). I’m just not convinced you need this machinery at all, given that practical rationality is simpler (Occam’s razor!) and seems to be already doing the job.


Kevin
10/3/2020 09:52:45 am

Thanks for the question! It's a good one.

My basic argument that it's epistemic rationality that really matters is that, I think, whether or not we demonize someone for doing as they do depends on whether we think they are being epistemically rational. Suppose you think voting for Trump is a bad thing. If I tell you Jill voted for Trump bc it's in her interest (was practically rational), then I'm betting your reaction will be "sure, that's 'rational' in the sense that bad and evil people can be rational—I blame Jill for doing that". On the other hand, if I told you it was *epistemically* rational for her to believe Trump would make a better president than Biden (and you believed that conclusion), then I think it's much more natural to say, "Wow, how did she get so misled? What processes are leading her to have such different information than I do?" There's more of a push toward understanding.

This can be buttressed with a (too) simple theory of blameworthiness: a person is blameworthy for doing action A if and only if they should've expected some other action B to have a better outcome. This theory is obviously not exactly right, but the contours of it are, I think. We don't blame people when they do something that has a bad outcome if they had every reason to think that it would have a good one, for instance.

So that's, ultimately, why what I think we should care about is epistemic rationality—because that's what goes with blame, and blaming the other side for disagreeing with us is one of the main drivers of affective polarization / demonization.


Of course, I totally agree with you that it's much HARDER to establish that these disagreements are epistemically rational than that they are practically so. But that, I think, is why we need the machinery of the theory.





Rafal
10/12/2020 05:09:42 pm

Hi Kevin, thank you for your response!

I view *epistemic* and *practical* rationality as alternative social norms. I believe that there indeed are people who adopted and internalized epistemic rationality as a norm (at least to some degree). However, I do not believe that this norm is as common as your arguments would require. So what I believe would really be needed to convince people like me of your theory is not necessarily the machinery you are building (this is only the next step) but strong evidence that people and social groups really are epistemically rational (in the sense of having commonly adopted this norm).

Note that I believe that this would require a different type of experiment than the one you described in an earlier post. That experiment shows that people’s beliefs will fork when subjected to differentiated information. This is not surprising, but is also not enough. There will be people who vote differently even when they share common beliefs, because they care about different things (this is fully consistent with practical rationality and thus my point earlier that epistemic rationality does not seem to pass the Occam’s razor test).

This is not essential to my arguments above, but here is where I’m coming from: my intuition against epistemic rationality being a commonly adopted norm is based on the view that evolutionary psychology and cultural evolutionary theory offer a good explanation of human norms, values and choices. Practical rationality naturally fits this viewpoint (using the terminology used in those fields, you can easily see how it could be *adaptive*). It is much more difficult to see how *epistemic* rationality could be adaptive. Conversely, it is easy to come up with examples pointing towards this norm being maladaptive (potentially all cases when acting in line with epistemic rationality means acting against practical rationality). But I do not think you need to subscribe to the evolutionary psychology viewpoint to question whether the epistemic rationality norm is common!

Rafal
10/12/2020 05:22:22 pm

At the same time, the more I'm thinking about it, the more I feel that practical rationality may not be a full answer. That is what makes me so interested in your research.

Practical rationality has one weak point: if we accept that this is our (only) relevant norm, it is difficult to be mad at anyone voting differently than us (without being a hypocrite, that is).

But I'm not convinced (yet) that it's epistemic rationality that is the answer (for reasons I explained in the other post).

Kevin
10/14/2020 08:58:20 am

Thanks Rafal! I see where you're coming from here, and it's not a position I had super well articulated in my head before, so it's quite helpful to have it out there.

One immediate thought I have that may be relevant: I would be inclined to say that the evolutionary story for how people came to be epistemically rational is the *same* as the evolutionary story for how they came to be practically so. The basic point is that practical rationality tells you how to pursue your goals, given your beliefs. But if your beliefs are radically/systematically divorced from the truth (or more accurate beliefs you could've had), then that's not much help. The point of epistemic rationality is to get a connection between your beliefs and the truth—and thereby make practical rationality useful wrt selective advantage. Put in a too-blunt slogan: it pays to be epistemically rational because true beliefs are useful!

Rafal
11/8/2020 11:47:22 am

Hi Kevin, the first part of this article will give you an idea of why epistemic rationality is unlikely from the evolutionary perspective. But the article is worth reading till the end, as the other book it discusses comes much closer to your perspective. So some convergence may be possible?

https://www.newyorker.com/magazine/2017/02/27/why-facts-dont-change-our-minds

Matt Vermaire
9/28/2020 10:04:37 am

Thanks for this, Kevin! I admire your writing, and am really interested in the project.

I'm curious about the role you're giving to ambiguity. Your definition of ambiguous evidence here, if I read it right, makes it a matter of higher-order uncertainty (which I think is probably a feature of just about all our credences, even if—as I think Titelbaum's Fixed Point Thesis has it—it wouldn't be for a fully rational agent). But I don't see how this maps on to your examples, like the case of the job interview. You say there that you, as a good encourager, are trying to highlight unambiguous evidence that I'll get the job, like the fact that they were really excited about me, and cast the foreboding evidence (that Bill looked unhappy) as ambiguous. There's a kind of intuitive sense of "ambiguous" for which that makes sense: the interviewers' excitement _straightforwardly_ supports the job-getting hypothesis, while Bill's grumpiness is made to look less univocal. But isn't that quite a different matter from ambiguity in your technical sense? It may still be that neither piece of evidence, or my total evidence, supports certainty in higher-order normative judgments. (E.g., I have no idea just precisely how confident I should be I'll get the job, conditional on the interviewers' being excited.)

Kevin
10/3/2020 10:01:47 am

Thanks Matt! Yes, you're totally right, and I think it's a very good worry to raise about whether the technical notion of ambiguity I'm using maps onto the intuitive one. This is a really crucial piece of the project, and one I'm still working on fleshing out in more detail. A couple thoughts:

1) One of my strategies is to focus on certain toy cases (especially the word-completion task case) where I think that I *can* make a strong argument that higher-order uncertainty is in play. From there, I'll work my way out by showing how similar evidential structures (in the form of "cognitive search") are all over the place—stay tuned for some of the posts on empirical mechanisms that drive polarization, like "biased assimilation of evidence". So this answer is just to say "yeah, you're right it's an issue, but I think some of the arguments to come will help allay it".

2) My second strategy is just to argue that I think the intuitive notion of "ambiguity" in evidence actually maps on pretty well to when people have higher-order uncertainty. To take your example, I'd want to claim that it is precisely when evidence is "straightforward" that we don't have much doubt about how to react to it, whereas when evidence is "less univocal", we do. One way to make this argument more precise is to point out that one route to higher-order uncertainty is by being uncertain how to trade off various pieces of evidence that pull in opposite directions. (Some work on imprecise credences draws this intuition out quite well, e.g. Miriam Schoenfield's paper "Chilling out on epistemic rationality", https://link.springer.com/article/10.1007/s11098-012-9886-7, though I'd argue that the thing they're drawing out is better understood as evidence warranting higher-order uncertainty rather than mushy credences.) For example, if a trustworthy friend tells you the party starts at 5pm, you know that you should be quite confident that it starts at 5pm. If another tells you, with the same confidence, that it starts at 6pm, I'd say you *don't* know that you should now be 50-50 on whether it starts at 5 or 6pm, since you should be unsure whether you should trust your friends equally. So, I'd argue, conflict within your bits of evidence is a good proxy for higher-order uncertainty.

Was that convincing at all? Curious to hear what you think!

Matthew Vermaire
10/3/2020 01:10:10 pm

I'll stay tuned! It seems right that a lot of the interesting argumentative work will be done in connecting the formal models with the disagreement on the ground. If it turns out that "cognitive search", for instance—a cool notion—is widespread there, that could allay the worry.

I don't know about the second strategy, though I haven't read that Schoenfield paper, and ought to. I would have thought that I'm pretty much always uncertain about the correct *precise* credence I should have in things like the party's being at 5 PM, even if it's true that I can know more coarse-grained normative facts, like that I should be "quite confident." When I get conflicting testimony from friends, in your case, it seems to me I'd just *still* be unsure about the precise credence, though I may be in a position to know a different coarse-grained normative fact—like that I should have a "middling" credence in 5 PM vs. 6 PM. So, a lack of conflicting evidence doesn't yet look to me especially helpful for higher-order confidence.

A lack of *higher-order* evidence might be important, though? Like, if the problem isn't just that I'm balancing competing bits of the same sort of evidence, which point in opposite directions, but that I get evidence calling the force of my other evidence into question—the second friend says it's stupid to believe the first one, after she's lied to me before—then I could get the feeling that normative uncertainty could pile up very fast, with the whole situation becoming much more inscrutable for me (whether or not that would happen with an ideal epistemic agent).
