<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:dc="http://purl.org/dc/elements/1.1/" >

<channel><title><![CDATA[KEVIN DORST - Substack]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies]]></link><description><![CDATA[Substack]]></description><pubDate>Fri, 20 Mar 2026 11:41:12 -0400</pubDate><generator>Weebly</generator><item><title><![CDATA[Stranger Apologies is back! Now on Substack.]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/stranger-apologies-is-back-now-on-substack]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/stranger-apologies-is-back-now-on-substack#comments]]></comments><pubDate>Sat, 03 Jun 2023 14:38:57 GMT</pubDate><category><![CDATA[Uncategorized]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/stranger-apologies-is-back-now-on-substack</guid><description><![CDATA[ 	 		 			 				 					 						  Sign up for regular posts on why people are more rational than you think.Here's the first post.&nbsp;TLDR:&nbsp;We&rsquo;ve lost our epistemic empathy. Irrationalist narratives from psychology are partly to blame. Recent work from psychology and philosophy changes the narrative. Sign up to see why.Click here for latest post.   					 								 					 						          					 							 		 	  [...] ]]></description><content:encoded><![CDATA[<div><div class="wsite-multicol"><div class="wsite-multicol-table-wrap" style="margin:0 -15px;"> 	<table class="wsite-multicol-table"> 		<tbody class="wsite-multicol-tbody"> 			<tr class="wsite-multicol-tr"> 				<td class="wsite-multicol-col" style="width:51.215805471125%; padding:0 15px;"> 					 						  <div class="paragraph" style="text-align:justify;"><strong><a href="https://kevindorst.substack.com/about" target="_blank">Sign up</a></strong> for regular posts on why people are more rational than you think.<br /><br /><strong><a href="https://kevindorst.substack.com/p/stranger-apologies" target="_blank">Here's the first post</a>.&nbsp;<br />TLDR</strong>:&nbsp;<em style="color:rgb(64, 64, 64)"><span>We&rsquo;ve lost our epistemic empathy. Irrationalist narratives from psychology are partly to blame. Recent work from psychology and philosophy changes the narrative. Sign up to see why.</span></em><br /><br /><br /><a href="https://kevindorst.substack.com/" target="_blank">Click here for latest post</a>.</div>   					 				</td>				<td class="wsite-multicol-col" style="width:48.784194528875%; padding:0 15px;"> 					 						  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"> <a href='https://kevindorst.substack.com/' target='_blank'> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/23-6-3-substack-image.jpeg?1685803672" alt="Picture" style="width:291;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>   					 				</td>			</tr> 		</tbody> 	</table> </div></div></div>]]></content:encoded></item><item><title><![CDATA[Belief Perseverance is Rational]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/belief-perseverance-is-rational]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/belief-perseverance-is-rational#comments]]></comments><pubDate>Sat, 03 Apr 2021 04:00:00 GMT</pubDate><category><![CDATA[All Most Read]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/belief-perseverance-is-rational</guid><description><![CDATA[(1700 words; 10 minute read.&nbsp; This post was inspired by a suggestion from Chapter 2 (p. 
60) of Jess Whittlestone's dissertation on confirmation bias.)&#8203;  Uh oh.&nbsp; No family reunion this spring&mdash;but still, Uncle Ron managed to get you on the phone for a chat.&nbsp; It started pleasantly enough, but now the topic has moved to politics.He says he&rsquo;s worried that Biden isn&rsquo;t actually running things as president, and that instead more radical folks are running the show.& [...] ]]></description><content:encoded><![CDATA[<div class="paragraph"><font color="#818181">(1700 words; 10 minute read.&nbsp; This post was inspired by a suggestion from Chapter 2 (p. 60) of <a href="https://jesswhittlestone.com/" target="_blank">Jess Whittlestone's</a> dissertation <a href="http://wrap.warwick.ac.uk/95233/" target="_blank">on confirmation bias</a>.)<br />&#8203;</font><br /></div>  <div class="paragraph">Uh oh.<span>&nbsp; </span>No family reunion this spring&mdash;but still, Uncle Ron managed to get you on the phone for a chat.<span>&nbsp; </span>It started pleasantly enough, but now the topic has moved to politics.<br /><br />He says he&rsquo;s worried that Biden isn&rsquo;t actually running things as president, and that instead more radical folks are running the show.<span>&nbsp; </span>He points to some <a href="https://www.instagram.com/p/CMhzMH9DPJG/?utm_source=ig_embed">video anomalies</a> (&ldquo;hand floating over mic&rdquo;) that led to the theory that many apparent videos of Biden are fake.<br /><br />You point out that the basis for that theory <a href="https://www.usatoday.com/story/news/factcheck/2021/03/19/fact-check-unaltered-video-biden-reporters-white-house-lawn/4748661001/">has been debunked</a>&mdash;the relevant anomaly was due to a video compression error, and several other videos taken from the same angle show the same scene.<br /><br />He reluctantly seems to accept this. The conversation moves on.<br /><br />But a few days later, you see him posting on social media about how that same video of Biden may have been fake!<br /><br />Sigh.<span>&nbsp;</span>Didn&rsquo;t you <em>just</em> correct that mistake? Why do people cling on to their beliefs even after they&rsquo;ve been debunked?<span>&nbsp; </span>This is the problem; another instance of people&rsquo;s irrationality leading to our polarized politics&mdash;right?<br /><br />&#8203;Well, it <em>is</em> a problem. But it may be all the thornier because it&rsquo;s often rational.</div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div class="paragraph">Real-world political cases are messy, so let&rsquo;s turn to the lab.<span>&nbsp; </span>The phenomenon is what&rsquo;s known as <a href="https://en.wikipedia.org/wiki/Belief_perseverance#:~:text=Belief%20perseverance%20(also%20known%20as,effect%20(compare%20boomerang%20effect)."><strong>belief perseverance</strong></a>: after being provided with evidence to induce a particular belief, and then later being told that the evidence was actually bunk, people tend to still maintain the induced belief to some extent (<a href="https://psycnet.apa.org/record/1976-07163-001">Ross et al. 1975</a>; <a href="https://www.sciencedirect.com/science/article/pii/S002210310600031X?casa_token=mwyJ9En61JYAAAAA:vMcfK1q5_G95i5MjrkHKQXL_ldOVbvPTm2nqaJUz3_QauD5syE4x5Cmf7BtpHbgL5M-tYQiH">McFarland et al. 
2007</a>).<br /><br />&#8203;A classic example: the <strong>debriefing paradigm</strong>.<span>&nbsp; </span>Under the cover story of testing subjects&rsquo; powers of social perception, researchers gave them 15 pairs of personal letters, and asked them to identify which was written by a real person and which was written by the experimenters.<br /><br />Afterwards, they received feedback on how well they did. Unbeknownst to the subjects, that feedback was bogus&mdash;they were (at random) told that they got either 14 of 15 notes correct (positive feedback), or 4 of 15 correct (negative feedback). Both groups were told the average person got 9 of 15 correct.<br /><br />As a result of this feedback, they come to believe that they are above (or below) average on this sort of task.<br /><br />Some time later, subjects are &ldquo;debriefed&rdquo;: told that the feedback that they received was bunk, and had nothing to do with their actual performance. They&rsquo;re then asked to estimate how well they would do on the task if they were to perform it on a new set of letters.<br /><br />The <strong>belief perseverance effect</strong> is that, on average, those who received the positive feedback expected to do better on a new set of letters than those who received the negative feedback&mdash;despite the fact that they had both been told that the feedback they received was bunk!<br /><br />So even after the evidence that induced their belief had been debunked, they still maintained their belief to some degree.<br /><br />And rationally so, I say.<br /><br />Imagine that Baya the Bayesian is doing this experiment.<span>&nbsp;</span>She&rsquo;s considering how well she&rsquo;d <strong>P</strong>erform on a future version of the test&mdash;trying to predict the value of a variable, <strong>P</strong>, that ranges from 1&ndash;15 (the number she&rsquo;d get correct).<br /><br />She doesn&rsquo;t know how she&rsquo;d perform, but she can form an <em>estimate</em> of this quantity, <strong>E[P]</strong>: a weighted average of the various values P might take, with weights determined by how likely she thinks they are to obtain.<br /><br />Suppose she starts out with no idea&mdash;she&rsquo;s equally confident that she&rsquo;d get any score between 1&ndash;15:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/21-4-3-flat-expectations.jpeg?1617373108" alt="Picture" style="width:431;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">Then her estimate for P is just the straight average of these numbers:<br /><br />E[P]&nbsp; =&nbsp; 1/15*1 + 1/15*2 + &hellip; + 1/15*15&nbsp; =&nbsp; 8<br /><br />Suppose now she&rsquo;s given positive feedback (&ldquo;You got 14 of 15 correct!&rdquo;)&mdash;label this information &ldquo;<strong>+F</strong>&rdquo;.<span>&nbsp; </span>Given this feedback, she can form a <em>new</em> estimate, <strong>E[P|+F]</strong>, of how well she&rsquo;d do on a future test&mdash;and since her feedback was positive, that new estimate should be higher than her original estimate of 8.<br /><br />&#8203;Perhaps, given the feedback, she&rsquo;s now relatively confident she&rsquo;d get a good score on the test&mdash;her probabilities look like this:</div>  <div><div class="wsite-image wsite-image-border-none " 
style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/21-4-3-skewed-expectations_orig.jpeg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">Then her estimate for her performance is roughly 13:<br /><br />&#8203;E[P | +F]&nbsp; &asymp;&nbsp; 0.25 *15 + 0.2*14 + 0.15*13 + &hellip;<span>&nbsp; &nbsp; </span>=&nbsp; 13<br /><br />Now in the final stage, she&rsquo;s told that the positive feedback she received was bunk. <span>&nbsp; </span>So the total information she received is &ldquo;positive feedback (+F), but told that it was bunk&rdquo;. What should her final estimate of her future performance, <strong>E[P | +F, told bunk]</strong>, be?<br /><br />Well, how much does she believe the experimenters?<span>&nbsp; </span>After all, they&rsquo;ve already admitted they&rsquo;ve lied to her at least once; so how likely does the fact that they <em>told</em> her the feedback was bunk make it that the feedback <em>in fact</em> was bunk?<span>&nbsp; </span>Something less than 100%, presumably.<br /><br />Now, what she was <em>told</em> about whether the feedback was bunk is relevant to her estimate of her future performance only because it is evidence about whether the feedback was <em>actually</em> bunk.<span>&nbsp;</span>Thus information about whether the feedback was bunk should &ldquo;screen off&rdquo; what the experimenters told her.<span>&nbsp; </span>(If a hyper-reliable source told her &ldquo;The feedback was legit&mdash;they were lying to you when they told you it was bunk&rdquo;&mdash;she should go back to estimating that her performance would be quite good, since the legitimate feedback was positive.)<br /><br />Upshot: so long as Baya doesn&rsquo;t <em>completely trust</em> the psychologist&rsquo;s claim that the information was bunk&mdash;as she shouldn&rsquo;t, since they&rsquo;re in the business of deception&mdash;then she should still give some credence to the possibility that the feedback wasn&rsquo;t bunk.</div>  <div class="paragraph" style="text-align:right;"><font color="#818181">(800 words left)</font></div>  <div class="paragraph"><strong>Crucial point:</strong> if she&rsquo;s unsure whether the feedback she received was bunk, her final estimate for her future performance should be an <em>average</em> of her two previous estimates.<br /><br />&#8203;Conditional on the feedback <em>actually</em> being bunk, her estimate should revert to it&rsquo;s initial value of 8:<br /><br />E[P | +F, told bunk, is bunk] = 8.<br /><br />Meanwhile, conditional on the feedback <em>actually</em> being <em>not</em> bunk, her estimate should jump up to it&rsquo;s previous high value of 13:<br /><br />E[P | +F, told bunk, not bunk] = 13.<br /><br />When she doesn&rsquo;t know whether the feedback is bunk or not, she should think to herself, &ldquo;Maybe it is bunk, in which case I&rsquo;d expect to get only a 8 on a future test; but maybe it&rsquo;s <em>not</em> bunk, in which case I&rsquo;d expect to get a 13 on a future test.&rdquo;<span>&nbsp; </span>Thus (by what&rsquo;s known as the <a href="https://en.wikipedia.org/wiki/Law_of_total_expectation">law of total expectation</a>) her overall estimate should be an average of these two possibilities, with weights determined by how likely she thinks they are to obtain.<br /><br />So, for example, suppose she thinks it&rsquo;s 80% likely that the 
psychologists were telling the truth when they said the feedback was bunk.<span>&nbsp; </span>Then her final estimate should be:<br /><br />E[P | +F, told bunk]&nbsp; =&nbsp; 0.8*8 + 0.2*13 <span>&nbsp;</span>=&nbsp; 9.<br /><br />This value is greater, of course, than her original estimate of 8. (The reasoning generalizes; see the Appendix below.)
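<br /><br />For concreteness, here&rsquo;s the same calculation as a minimal Python sketch (the flat prior, the post-feedback estimate of 13, and the 80% trust level are just the example&rsquo;s numbers):<pre>
def total_expectation(cases):
    # Law of total expectation: weight each conditional estimate
    # by how likely she thinks that case is to obtain.
    return sum(prob * est for prob, est in cases)

E_P = sum(k / 15 for k in range(1, 16))   # flat prior over 1-15: 8.0
E_P_if_legit = 13                         # estimate if the feedback stands
trust = 0.8                               # credence that the debrief is true

final = total_expectation([(trust, E_P), (1 - trust, E_P_if_legit)])
print(final)  # 9.0 -- above the prior estimate of 8: belief perseverance
</pre>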
<br /><br />By exactly parallel reasoning, if Baya were instead given <em>negative</em> feedback (<strong>&ndash;F</strong>) at the beginning, she would end up with an estimate slightly <em>lower</em> than her initial value of 8; say, E[P | &ndash;F, told bunk]&nbsp; =&nbsp; 7.<br /><br />As a result, there <em>should</em> be a difference between the two conditions.<span>&nbsp; </span>Even after being told the feedback was bunk, both groups should wonder whether it in fact was.<span>&nbsp; </span>Because of these doubts, those who received positive feedback should adjust their estimates slightly up; those who received negative feedback should adjust them slightly down. Thus, even if our subjects are rational Bayesians, there&nbsp;<em>should</em> be a difference between the groups who received positive vs. negative feedback, even after being told it was bunk:<br /><br />E[P | +F, told bunk]&nbsp;&nbsp;<span>&nbsp;&nbsp;</span>&gt;&nbsp; &nbsp;E[P]&nbsp; &nbsp;&gt;&nbsp; &nbsp;E[P | &ndash;F, told bunk].<br /><br />The belief perseverance effect should be expected of rational people.<br /><br />A prediction of this story is that insofar as the subjects are inclined to trust the experimenters, their estimates of their future performance should be far <em>less</em> affected by the feedback after being told it was bunk than before being so told.<span>&nbsp; </span>(The more they trust them, the more weight their initial estimate plays in the final average.)<br /><br />This is exactly what <a href="https://www.sciencedirect.com/science/article/abs/pii/S002210310600031X?casa_token=mwyJ9En61JYAAAAA:vMcfK1q5_G95i5MjrkHKQXL_ldOVbvPTm2nqaJUz3_QauD5syE4x5Cmf7BtpHbgL5M-tYQiH" target="_blank">the studies</a> find. 
For subjects who are never told that the feedback was bunk, those who received negative feedback estimated their future performance to be 4.46, while those who received positive feedback estimated it to be 13.18.<br /><br />In contrast, for subjects who were told the feedback was bunk, those who received negative feedback estimated their future performance at 7.96, while those who received positive feedback estimated it at 9.33.<br /><br />There is a statistically significant difference between these two latter values&mdash;that&rsquo;s the belief perseverance effect&mdash;but it is much smaller than the initial divergence.&nbsp; As the rational story predicts.<br /><br />Another prediction of this rational story is that insofar as psychologists can get subjects to <em>fully</em> believe that the feedback really was bunk, the belief perseverance effect should disappear.<br /><br />Again, this is <a href="https://www.sciencedirect.com/science/article/abs/pii/S002210310600031X?casa_token=mwyJ9En61JYAAAAA:vMcfK1q5_G95i5MjrkHKQXL_ldOVbvPTm2nqaJUz3_QauD5syE4x5Cmf7BtpHbgL5M-tYQiH" target="_blank">what they find</a>.<span>&nbsp; </span>Some subjects were given a much more substantial debriefing&mdash;explaining that the task itself is not a legitimate measure of anything (<em>no one</em> can reliably identify the real letters).<span>&nbsp; </span>Such subjects exhibited no belief perseverance at all.<br /><br />Upshot: in the lab, the belief perseverance effect could well be fully rational.<br /><br /><strong>Okay. But what about <em>Uncle Ron?</em></strong><br /><br />Well, obviously real-world cases like this are much more complicated. But Ron&rsquo;s case does share some structure with the above lab example.<br /><br />Ron originally had some (perhaps quite low) degree of belief that something fishy was going on with Biden. He then saw a video which boosted that level of confidence. Finally, he was told that the video was bunk.<br /><br />So long as he doesn&rsquo;t <em>completely trust</em> the source that debunks the video, it makes sense for him to remain slightly more suspicious than he was originally.<span>&nbsp; </span><em>How</em> suspicious, of course, depends on how much he ought to believe that the video really was bunk.<br /><br />But even if he trusts the debunker quite a bit, a small bump in suspicion will remain. And if he sees a <em>lot</em> of bits of evidence like this, then <em>even if</em> he&rsquo;s pretty confident that each one is bunk, his suspicions might reasonably start to accumulate.<span>&nbsp; &nbsp;</span><br /><br />If that's right, the fall into conspiracy theories is an epistemic form of death-by-a-thousand-cuts. The tragedy is that rationality may not guard against it.<br /><br /><br />What next?<br /><strong>If you</strong>&#8203;&nbsp;<strong>liked this post</strong>, consider sharing it on <a href="https://twitter.com/kevin_dorst/status/1378345350868697089" target="_blank">social media</a> or <a href="https://mailchi.mp/279517050568/stranger_apologies_signup">signing up for the newsletter</a>.<br /><strong>For more on belief perseverance</strong>, check out the recent controversy over the "<strong>backfire effect</strong>"&mdash;the result that sometimes people double down on their beliefs in the face of corrections. See e.g. 
<a href="https://link.springer.com/article/10.1007/s11109-010-9112-2" target="_blank">Nyhan and Reifler 2010</a>&nbsp;for the initial effect,&nbsp;and then&nbsp;<a href="https://link.springer.com/article/10.1007/s11109-018-9443-y" target="_blank">Wood and Porter 2019</a>&nbsp;for a replication attempt that argues that the effect is <em>very</em> rare.<br /><strong>For more discussion</strong>, see the <a href="https://www.reddit.com/r/philosophy/comments/mlzidy/the_fall_into_conspiracy_theories_is_an_epistemic/" target="_blank">reddit thread on this post</a>.</div>  <div class="paragraph"><br /><strong><font size="5"><u>Appendix</u></font></strong><br /><br />Consider the Baya case. Why does the reasoning go through generally?<br /><br />P is a random variable&mdash;a function from possibilities to numbers&mdash;that measures how well she would do on a test if she were to retake it.<span>&nbsp; </span>Given her (probabilistic) credence function C, her estimate for P is<br /><br /></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/21-4-3-a-1.png?1617457406" alt="Picture" style="width:242;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">In general, her&nbsp;</span><em style="color:rgb(42, 42, 42)">conditional</em><span style="color:rgb(42, 42, 42)">&nbsp;estimate for P, given information X, is:</span><br /></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/21-4-3-a-2.png?1617457447" alt="Picture" style="width:279;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">Here are the needed assumptions for the above reasoning to go through.</span><br /><br /><span style="color:rgb(42, 42, 42)">First, information about whether the feedback was bunk screens off whether she was&nbsp;</span><em style="color:rgb(42, 42, 42)">told</em><span style="color:rgb(42, 42, 42)">&nbsp;it was bunk from her estimate about P:</span><br /></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/21-4-3-a-3.png?1617457472" alt="Picture" style="width:471;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">and</span></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/21-4-3-a-4.png?1617457505" alt="Picture" style="width:474;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">What should her estimate be in those two cases?</span><br /><br /><span style="color:rgb(42, 42, 42)">Well, conditional on the feedback but also on its being bunk, she should revert to her initial estimate E[P]:</span><span style="color:rgb(42, 42, 
42)">&#8203;&#8203;&#8203;</span></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/21-4-3-a-5.png?1617457532" alt="Picture" style="width:269;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">Conditional on it&nbsp;</span><em style="color:rgb(42, 42, 42)">not</em><span style="color:rgb(42, 42, 42)">&nbsp;being bunk, she should move her estimate in the direction of the feedback&mdash;so if the feedback is positive, she should raise her estimate of her performance:</span><br /></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/21-4-3-a-6.png?1617457561" alt="Picture" style="width:282;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">But of course she doesn&rsquo;t know whether it was bunk or not&mdash;all she knows is that she was&nbsp;</span><em style="color:rgb(42, 42, 42)">told</em><span style="color:rgb(42, 42, 42)">&nbsp;it was bunk.</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">By the law of total expectation, and then our screening off assumption, we have:</span><br /></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/screen-shot-2021-04-02-at-10-48-52-am.png?1617457567" alt="Picture" style="width:628;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">By parallel reasoning, if she gets negative feedback she should end up with a <em>lower</em> estimate than her initial one:<br /></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/21-4-3-a-7.png?1617457591" alt="Picture" style="width:271;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">And thus, whenever she doesn&rsquo;t completely trust the experimenters when they tell her the info is bunk, the feedback should still have an effect:</span><br /></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/21-4-3-a-8.png?1617457637" alt="Picture" style="width:396;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">&#8203;Which, of course, is just the belief perseverance effect.</span></div>]]></content:encoded></item><item><title><![CDATA[Update]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/update]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/update#comments]]></comments><pubDate>Mon, 15 Mar 2021 16:54:58 
GMT</pubDate><category><![CDATA[Uncategorized]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/update</guid><description><![CDATA[Apologies for dropping off the map there&mdash;turns out moving and starting a new job in the midst of a pandemic takes up some bandwidth!&nbsp;&nbsp;Now that I've found my feet a bit, I just wanted to post a few quick blog updates:What's next?My "blog ideas" folder is overflowing, so expect&nbsp;a series of short, fun explorations of apparent biases in the coming months.&nbsp;We'll play with simple Bayesian models of why beliefs persist after corrections, in what sense rationalization is ration [...] ]]></description><content:encoded><![CDATA[<div class="paragraph">Apologies for dropping off the map there&mdash;turns out moving and starting a new job in the midst of a pandemic takes up some bandwidth!&nbsp;&nbsp;<br /><br />Now that I've found my feet a bit, I just wanted to post a few quick blog updates:<ul><li><strong>What's next?</strong><ul><li>My "blog ideas" folder is overflowing, so expect&nbsp;a series of short, fun explorations of apparent biases in the coming months.&nbsp;</li><li>We'll play with simple Bayesian models of why beliefs persist after corrections, in what sense rationalization is rational, how (standard, precise)&nbsp;Bayesians can be averse to ambiguity, and even (maybe) why being hungry and tired can (unfortunately!) make it rational to grade more harshly.&nbsp; Stay tuned!</li></ul></li><li><strong>What happened to rational polarization?</strong><ul><li>As followers of this blog will have noticed, the <strong><em><a href="https://www.kevindorst.com/stranger_apologies/rp" target="_blank">Reasonably Polarized</a></em></strong>&nbsp;series never really wrapped up. That's because there's a&nbsp;<em>bunch</em>&nbsp;more to say&mdash;about how evidential ambiguity can make&nbsp;"overconfidence" and&nbsp;motivated reasoning rational, how it can help explain why polarization is increasing, and even how it fits with the fact that both sides of our political disagreements&nbsp;think that the <em>other</em> side is&nbsp;<em>ir</em>rational.&nbsp;&nbsp;</li><li>But instead of focusing on the blog versions, I'm currently (finally!) focusing on writing up the academic paper. Once that's done, I'll revisit those topics here.</li><li>In the meantime, if you want to get a whirlwind tour of the most recent version of the argument for rational polarization, check out the recording of this <strong><a href="https://www.youtube.com/watch?v=-goiwpyd6mA&amp;ab_channel=CenterforPhilosophyofScience" target="_blank">recent presentation</a></strong>&nbsp;I gave (<a href="https://www.kevindorst.com/uploads/8/8/1/7/88177244/21.3_rp_handout.pdf" target="_blank">handout here</a>).</li></ul></li><li><strong>What am I going on about?</strong><ul><li>I get a lot of questions of the form, "You say evidence is '<a href="https://www.kevindorst.com/stranger_apologies/rp" target="_blank">ambiguous</a>' when it warrants&nbsp;<em>higher-order uncertainty</em>, which we can model with higher-order probabilities. But does higher-order probability even make sense?&nbsp;(Or sometimes: didn't Leonard Savage show in the 50s that higher-order probability is confused?)"</li><li>Great question.&nbsp;It does. 
(And he didn't.)&nbsp;The fact that we can have probabilities about our probabilities is no more puzzling than the fact that we can know things about what we know, or believe things about what we believe.&nbsp;<ul><li>If you have a basic familiarity with modal logic, <strong><a href="https://www.kevindorst.com/uploads/8/8/1/7/88177244/3.9_hop_and_wc_tasks.pdf" target="_blank">here's a short writeup</a></strong> that explains how to model and think about higher-order probability in a simple concrete case (the <a href="https://www.kevindorst.com/stranger_apologies/how-to-polarize-rational-people" target="_blank">word-completion task</a>).</li><li>If you want more in-depth explanation of how to think about such models, check out <a href="https://philpapers.org/rec/DORHE-2" target="_blank">this paper</a>&nbsp;or this <a href="https://philpapers.org/rec/DORHE-2" target="_blank">handbook article</a>.</li></ul></li></ul></li></ul><br />&#8203;More coming soon!</div>]]></content:encoded></item><item><title><![CDATA[Why Arguments Polarize Us]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/why-arguments-polarize-us]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/why-arguments-polarize-us#comments]]></comments><pubDate>Sat, 31 Oct 2020 04:00:00 GMT</pubDate><category><![CDATA[Uncategorized]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/why-arguments-polarize-us</guid><description><![CDATA[(1400 words; 7 minute read.)  The most important factor that drove Becca and me apart, politically, is that we went our separate ways, socially. I went to a liberal university in a city; she went to a conservative college in the country. I made mostly-liberal friends and listened to mostly-liberal professors; she made mostly-conservative friends and listened to mostly-conservative ones.&#8203;As we did so, our opinions underwent a large-scale version of the group polarization effect: the tendenc [...] ]]></description><content:encoded><![CDATA[<div class="paragraph"><font color="#818181">(1400 words; 7 minute read.)</font></div>  <div class="paragraph">The most important factor that <a href="https://www.kevindorst.com/stranger_apologies/rp">drove Becca and me apart</a>, politically, is that we went our separate ways, socially. I went to a liberal university in a city; she went to a conservative college in the country. I made mostly-liberal friends and listened to mostly-liberal professors; she made mostly-conservative friends and listened to mostly-conservative ones.<br /><br />&#8203;As we did so, our opinions underwent a large-scale version of the <a href="https://en.wikipedia.org/wiki/Group_polarization"><strong>group polarization effect</strong></a>: the tendency for group discussions amongst like-minded individuals to lead to opinions that are <em>more homogenous</em> and <em>more extreme</em> in the same direction as their initial inclination.<br /><br />The predominant force that drives the group polarization effect is simple: discussion amongst like-minded individuals involves <a href="https://psycnet.apa.org/record/1986-24477-001">sharing like-minded arguments</a>. 
(For more on the effect of&nbsp;<em>social influence</em>, see&nbsp;<a href="https://www.kevindorst.com/stranger_apologies/pandemic-polarization-is-reasonable">this post</a>.)<br /><br />Spending time with liberals, I was exposed to a predominance of liberal arguments; as a result, I became more liberal.<span>&nbsp; </span>Vice versa for Becca.<br /><br />Stated that way, this effect can seem commonsensical: <em>of course</em> it&rsquo;s reasonable for people in such groups to polarize. For example: I see more arguments in favor of gun control; Becca sees more arguments against them. So there&rsquo;s nothing puzzling about the fact that, years later, we have wound up with radically different opinions about gun control. Right?<br /><br />Not so fast.</div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div class="paragraph">In 2010, Becca and I each knew where we were heading. I knew that I&rsquo;d be exposed mainly to arguments in favor of gun control, and that she&rsquo;d be exposed mainly to arguments against it.<span>&nbsp; </span>At that point, we both had fairly non-committal views about gun rights&mdash;meaning that I didn&rsquo;t expect the liberal arguments I&rsquo;d witness to be more probative than the conservative arguments she&rsquo;d witness.<br /><br />This gives us the puzzle. At a certain level of generality, I know nothing about gun rights now that I didn&rsquo;t in 2010.<span>&nbsp; </span>Back then, I knew I&rsquo;d see arguments in favor, but I was not yet persuaded.<span>&nbsp;</span>Now in 2020 I <em>have</em> seen those arguments, and I <em>am</em> persuaded.<br /><br />Why the difference?<span>&nbsp;</span>How could <em>receiving</em> the arguments in favor of gun control have a (predictably) different effect on my opinion than <a href="https://philpapers.org/rec/SALTEG"><em>predicting </em>that I&rsquo;d receive such arguments?</a> And given that I could&rsquo;ve easily gone to a more conservative college and wound up with Becca&rsquo;s opinions, doesn&rsquo;t this mean that my current opinion that we need gun control was <a href="http://www.miriamschoenfield.com/F/why-do-you-believe-what-you.pdf">determined by <em>arbitrary factors</em></a>? In light of this, <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1520-8583.2010.00204.x">how can I maintain my firm belief?</a><br /><br /><a href="https://www.kevindorst.com/stranger_apologies/rational-polarization-can-be-profound-persistent-and-predictable">As we&rsquo;ve seen</a>, ambiguous evidence can in principle offer an answer to these questions. Here I&rsquo;ll explain how this answer can apply to the group polarization effect.<br /><br />Begin with a simple question: <strong>why do arguments persuade? </strong>You might think that&rsquo;s a silly question&mdash;arguments in favor of gun control provide evidence for the value of gun control, and people respond to evidence; so rational people are predictably convinced by arguments. Right?<br /><br />Wrong. 
Arguments in favor of gun control don&rsquo;t <em>necessarily</em> provide evidence for gun control&mdash;it depends on how good the argument is!<br /><br />When someone presents you with an argument for gun control, the total evidence you get is more than just the facts the argument presents; you also get evidence that the person was <em>trying to convince you</em>, and so that they were appealing to the most convincing facts they could think of.<br /><br />If the facts were more convincing than you expected&mdash;say, &ldquo;Guns are the <a href="https://www.nejm.org/doi/full/10.1056/nejmsr1804754">second-leading cause of death in children</a>&rdquo;&mdash;then you get evidence favoring gun control. But if the facts were <em>less</em> convincing than you expected&mdash;say, &ldquo;Many people think we should ban assault weapons&rdquo;&mdash;then the fact that this was the argument they came up with actually provides evidence <em>against</em> gun control. (H/t <a href="https://twitter.com/KevinZollman/status/1316778404109221889">Kevin Zollman on October surprises</a>.)<br /><br />This is a&nbsp;<a href="https://www.kevindorst.com/stranger_apologies/what-is-rational-polarization">special case of the fact that</a>, when evidence is unambiguous, <a href="https://philpapers.org/rec/SALTEG">there&rsquo;s no way</a> to investigate an issue that you can expect to make you more confident in your opinion.
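<br /><br />To see this concretely: with unambiguous evidence, your expected posterior just equals your prior, so no inquiry can be expected to shift you toward your current view. Here&rsquo;s a minimal Python sketch of that fact (the prior and the likelihoods are arbitrary illustrative numbers):<pre>
prior_H = 0.3                             # credence in hypothesis H
likelihoods = {"pro": (0.9, 0.4),         # P(e | H), P(e | not-H)
               "con": (0.1, 0.6)}

expected_posterior = 0.0
for p_e_H, p_e_notH in likelihoods.values():
    p_e = p_e_H * prior_H + p_e_notH * (1 - prior_H)
    posterior = p_e_H * prior_H / p_e     # Bayes' rule
    expected_posterior += p_e * posterior

print(round(expected_posterior, 10))      # 0.3 -- exactly the prior
</pre>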
<br /><br />Why, then, do arguments for a claim predictably shift opinions about it?<br /><br />My proposal: by generating asymmetries in ambiguity. They make it so that reasons <em>favoring</em> the claim are less ambiguous&mdash;and so easier to recognize&mdash;than those telling against it.<br /><br />Here&rsquo;s an example&mdash;about Trump, not gun control.</div>  <blockquote><a href="https://www.vox.com/2020/10/14/21515946/rebeccah-heinrichs-the-ezra-klein-show-donald-trump-china">Argument:</a> &ldquo;Trump&rsquo;s foreign policy has been a success. In particular, his norm-breaking personality was needed in order to shift the political consensus about foreign policy and address the growing confrontation between the U.S. and an increasingly aggressive and dictatorial China.&rdquo;</blockquote>  <div class="paragraph">This is an argument that we should re-elect Trump.<span>&nbsp;</span>Is this a good argument&mdash;that is, does it provide evidence in favor of Trump being re-elected?<br /><br />&#8203;I&rsquo;m not sure&mdash;the evidence is ambiguous. 
What I am sure of is that if it <em>is</em> a good argument, then it&rsquo;s so for relatively <strong>straightforward reasons</strong>: &ldquo;Trump&rsquo;s presidency has had and will have good long-term effects.&rdquo;<br /><br />If it&rsquo;s <em>not</em> a good argument, then it&rsquo;s for relatively <strong>subtle reasons</strong>: perhaps we should think, &ldquo;Is that the best they can come up with?&rdquo;; perhaps we should think that relations with China were already changing before Trump; perhaps we should be unsure whether inflaming the confrontation is a good thing; etc.<br /><br />Regardless: if it&rsquo;s a good argument, it&rsquo;s easier to recognize as such than if it&rsquo;s a bad argument. As a result, we can expect to be, on average, somewhat persuaded by arguments like this.<br /><br />As before, it&rsquo;s entirely possible for this to be so and yet for the argument to satisfy the value of evidence&mdash;and, therefore, for us to expect that it&rsquo;ll make us more accurate to listen to the argument rather than ignore it.&nbsp; Whenever we're given the option to listen to a new argument, we should take it if we want to make our beliefs accurate.<br /><br />And yet, because each argument is predictably (somewhat) persuasive, long periods of exposure to pro (con) arguments can lead to predictable, profound, but rational shifts in opinions.<br /><br />We can model how this worked with me and Becca.<span>&nbsp; </span>The blue group (me and my liberal friends) were exposed to arguments favoring gun control&mdash;that is, arguments that were less ambiguous when they supported gun control than when they told against it. The red group (Becca and her conservative friends) were exposed to arguments disfavoring gun control&mdash;that is, arguments that were more ambiguous when they supported gun control than when they told against it.<br /><br />Suppose that, as a matter of fact, 50% of the arguments point in each direction.<span>&nbsp;Because of the asymmetries in our ability to&nbsp;<em>recognize</em>&nbsp;which arguments are good and which are bad, t</span>he result is polarization:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/10-24-b7-50-args_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%">Simulation of 20 (blue) agents repeatedly presented with arguments that are less ambiguous when they support Claim, and 20 (red) agents presented with arguments that are less ambiguous when they tell against Claim. In fact, 50% of arguments point in each direction. Thick lines are averages within groups.</div> </div></div>
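<div class="paragraph"><font color="#818181">(The actual models behind these simulations are in the technical appendix linked below. Purely to illustrate the dynamics in the figure, here is a deliberately crude Python sketch in which an ambiguous argument simply moves belief by a smaller step than a clear one:)</font></div>  <pre>
import math, random

def final_belief(rounds, p_pro, step_pro, step_con):
    # Toy log-odds walk: each argument nudges credence in Claim by a
    # step whose size reflects how easy that argument is to assess.
    logodds = 0.0                        # start at 50/50 on Claim
    for _ in range(rounds):
        if random.random() &lt; p_pro:      # argument supports Claim
            logodds += step_pro
        else:                            # argument tells against Claim
            logodds -= step_con
    return 1 / (1 + math.exp(-logodds))

# Blue: pro-Claim arguments are clear (big step), con ones murky.
# Red: the reverse. In fact, half the arguments point each way.
blue = [final_belief(50, 0.5, 1.0, 0.3) for _ in range(20)]
red = [final_belief(50, 0.5, 0.3, 1.0) for _ in range(20)]
print(sum(blue) / 20, sum(red) / 20)     # the groups drift apart
</pre>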
<div class="paragraph">Notably, although such ambiguity asymmetries are a force for divergence in opinion, that force can be overcome.<span>&nbsp; </span>In particular, as the proportion of actually-good arguments gets further from 50%, convergence is possible even in the presence of such ambiguity-asymmetries.<span>&nbsp; </span>Here&rsquo;s what happens when 80% of the arguments provide evidence for gun control:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/10-24-b7-80-args_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%">As above, but this time, in fact, 80% of arguments tell in favor of Claim.</div> </div></div>  <div class="paragraph">Thus polarization due to differing arguments is not inevitable&mdash;but the more conflicted and ambiguous the evidence is, the more likely it is.<br /><br />How plausible is this as a model of the group polarization effect? Obviously there&rsquo;s much more to say, but there is indeed some evidence that group-polarization effects&mdash;and argumentative persuasion in particular&mdash;are driven by ambiguous evidence.<br /><br />First, the group discussions that induce polarization also tend to <a href="https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=1087&amp;context=public_law_and_legal_theory" target="_blank">reduce the variance in people's opinions</a>, and the amount of group shift in opinion is <a href="https://www.sciencedirect.com/science/article/pii/S0065260108600945" target="_blank">correlated with the initial variance of opinion</a>.&nbsp;<br /><br />These effects make&nbsp;sense if ambiguous arguments are driving the process.&nbsp;For: (1) the more ambiguous evidence is, the more variance of opinion we can expect;&nbsp;(2) if what people are doing is coordinating on what to think about various pieces of evidence, we would expect discussion to reduce ambiguity and hence reduce opinion-variance; and (3) the more initial variance there is, the more diverse the initial bodies of evidence were&mdash;so the more we should expect discussion to reveal <em>new</em>&nbsp;arguments, and hence to lead participants to shift their opinions.<br /><br />Second, participation in the discussion (as opposed to mere observation) <a href="https://pdfs.semanticscholar.org/076d/f881ee65628ffc8c0cf1f73859f1d226d4dc.pdf" target="_blank">heightens the polarization effect</a>, as can <a href="https://psycnet.apa.org/record/1995-98997-004" target="_blank">merely thinking about an issue</a>&mdash;especially if people are trying to <a href="https://media.gradebuddy.com/documents/1657138/1d2520cf-4a80-4d38-be44-2b4255bc0a46.pdf" target="_blank">think of reasons for or against their position</a> or expecting to <a href="https://journals.sagepub.com/doi/abs/10.1177/014616728174020" target="_blank">have a debate with someone</a> about it.&nbsp;&nbsp;<br /><br />If what people are doing in such circumstances is&nbsp;<em>formulating arguments</em>&nbsp;(perhaps through a <a href="https://www.kevindorst.com/stranger_apologies/confirmation-bias-as-avoiding-ambiguity" target="_blank">mechanism of cognitive-search</a>), then these are exactly the effects that this sort of model of asymmetrically-ambiguous arguments would predict&mdash;for they are in effect exposing themselves to a 
series of asymmetrically-ambiguous arguments.<br /><br /><strong>Upshot:</strong>&nbsp;it is not irrational to be predictably persuaded by (ambiguous) arguments&mdash;and the more people separate into like-minded groups, the more this leads to polarization.<br /><br /><br />What next?<br /><strong>If you liked this post</strong>, consider <a href="https://mailchi.mp/279517050568/stranger_apologies_signup" target="_blank">subscribing to the newsletter</a> or <a href="https://twitter.com/kevin_dorst/status/1322526823918587906" target="_blank">spreading the word</a>.<br /><strong>For the details of the models</strong> of asymmetrically-ambiguous arguments and the simulations that used them, check out the <a href="https://www.kevindorst.com/reasonably-polarized-technical-appendix.html" target="_blank">technical appendix</a> (&sect;7).<br /><strong>Next Post:</strong>&nbsp;we'll return to confirmation bias and see how&nbsp;<strong><a href="https://en.wikipedia.org/wiki/Selective_exposure_theory" target="_blank">selective search</a></strong>&nbsp;for new information is rational in the context of ambiguous evidence.</div>]]></content:encoded></item><item><title><![CDATA[Confirmation Bias Maximizes Expected Accuracy]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/confirmation-bias-as-avoiding-ambiguity]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/confirmation-bias-as-avoiding-ambiguity#comments]]></comments><pubDate>Sat, 17 Oct 2020 04:00:00 GMT</pubDate><category><![CDATA[All Most Read]]></category><category><![CDATA[Polarization]]></category><category><![CDATA[Reasonably Polarized series]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/confirmation-bias-as-avoiding-ambiguity</guid><description><![CDATA[(1700 words; 8 minute read.)      What rational polarization looks like.   &#8203;It&rsquo;s September 21, 2020. Justice Ruth Bader Ginsburg has just died.&nbsp; Republicans are moving to fill her seat; Democrats are crying foul.&#8203;Fox News publishes&nbsp;an op-ed by Ted Cruz&nbsp;arguing that the Senate has a duty to fill her seat before the election. The New York Times publishes an&nbsp;op-ed on Republicans&rsquo; hypocrisy&nbsp;and Democrats&rsquo; options.Becca and I each read both. I&md [...] ]]></description><content:encoded><![CDATA[<div class="paragraph"><font color="#818181">(1700 words; 8 minute read.)</font></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"> <a href='https://www.kevindorst.com/uploads/8/8/1/7/88177244/edited/10-17-b6-cs-acc-polarization.jpg' rel='lightbox' onclick='if (!lightboxLoaded) return false'> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/10-17-b6-cs-acc-polarization.jpg?1602845022" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%">What rational polarization looks like.</div> </div></div>  <div class="paragraph"><br />&#8203;It&rsquo;s September 21, 2020. Justice Ruth Bader Ginsburg has just died.<span>&nbsp; </span>Republicans are moving to fill her seat; Democrats are crying foul.<br /><br /><span style="color:rgb(42, 42, 42)">&#8203;Fox News publishes&nbsp;</span><a href="https://www.foxnews.com/opinion/sen-ted-cruz-ginsburg-senate-election">an op-ed by Ted Cruz</a><span style="color:rgb(42, 42, 42)">&nbsp;arguing that the Senate has a duty to fill her seat before the election. 
The New York Times publishes an&nbsp;</span><a href="https://www.nytimes.com/2020/09/21/opinion/ruth-bader-ginsburg-senate-democrats.html">op-ed on Republicans&rsquo; hypocrisy</a><span style="color:rgb(42, 42, 42)">&nbsp;and Democrats&rsquo; options.</span><br /><br /><span style="color:rgb(0, 0, 0)">Becca and I each read both. I&mdash;along with my liberal friends&mdash;conclude that Republicans are hypocritically and dangerously violating precedent.</span><span style="color:rgb(0, 0, 0)">&nbsp;</span><span style="color:rgb(0, 0, 0)">Becca&mdash;along with her conservative friends&mdash;concludes that Republicans are doing what needs to be done, and that Democrats are threatening to violate democratic norms (&ldquo;court packing??&rdquo;) in response.</span><br /><br /><span style="color:rgb(0, 0, 0)">In short: we both see the same evidence, but we react in opposite ways&mdash;ways that lead each of us to be confident in our opposing beliefs.</span><span style="color:rgb(0, 0, 0)">&nbsp; </span><span style="color:rgb(0, 0, 0)">In doing so, we exhibit a well-known form of </span><strong style="color:rgb(0, 0, 0)">confirmation bias</strong><span style="color:rgb(0, 0, 0)">.</span><br /><br /><span style="color:rgb(0, 0, 0)"><strong>And we are rational to do so</strong>: we both are doing what we should expect will make our beliefs most accurate.</span><span style="color:rgb(0, 0, 0)">&nbsp; </span><span style="color:rgb(0, 0, 0)">Here&rsquo;s why.</span></div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div class="paragraph">&#8203;<span style="color:rgb(0, 0, 0)">Confirmation bias is the tendency to gather and interpret evidence in a way that can be expected to favor your prior beliefs (</span><a href="https://journals.sagepub.com/doi/abs/10.1037/1089-2680.2.2.175">Nickerson 1998</a><span style="color:rgb(0, 0, 0)">; </span><a href="http://wrap.warwick.ac.uk/95233/1/WRAP_Theses_Whittlestone_2017.pdf">Whittlestone 2017</a><span style="color:rgb(0, 0, 0)">). There are two parts to this tendency.</span></div>  <blockquote><strong style="color:rgb(42, 42, 42)">Selective exposure</strong><span style="color:rgb(42, 42, 42)">&nbsp;is the tendency to&nbsp;</span><em style="color:rgb(42, 42, 42)">look</em><span style="color:rgb(42, 42, 42)">&nbsp;for evidence that confirms your prior beliefs (</span><a href="https://www.sciencedirect.com/science/article/pii/S0065260108602129">Frey 1986</a><span style="color:rgb(42, 42, 42)">). This captures the fact that I (a liberal) tend to check the New York Times more than Fox News, and Becca (a conservative) does the opposite.</span><br /><br /><strong style="color:rgb(42, 42, 42)">Biased assimilation</strong><span style="color:rgb(42, 42, 42)">&nbsp;is the tendency to&nbsp;</span><em style="color:rgb(42, 42, 42)">interpret</em><span style="color:rgb(42, 42, 42)">&nbsp;evidence in a way that favors your prior beliefs (</span><a href="https://psycnet.apa.org/record/1981-05421-001">Lord et al. 
1979</a><span style="color:rgb(42, 42, 42)">).</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">This is what happened when Becca and I read the same two op-eds about RBG&rsquo;s vacant seat and came to opposite conclusions about them.</span></blockquote>  <div class="paragraph"><span style="color:rgb(0, 0, 0)">Set aside selective exposure for now; today let&rsquo;s <u>focus on biased assimilation</u>.</span><span style="color:rgb(0, 0, 0)">&nbsp;&nbsp;</span><span style="color:rgb(0, 0, 0)">I&rsquo;m going to argue that it&rsquo;s the rational response to ambiguous evidence.</span><br /><br /><span style="color:rgb(0, 0, 0)">Consider what those who exhibit biased assimilation actually do (</span><a href="https://psycnet.apa.org/record/1981-05421-001">Lord et al. 1979</a><span style="color:rgb(0, 0, 0)">;&nbsp;</span><a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1540-5907.2006.00214.x">Taber and Lodge 2006</a><span style="color:rgb(0, 0, 0)">;&nbsp;</span><a href="https://www.jstor.org/stable/20620131">Kelly 2008</a><span style="color:rgb(0, 0, 0)">).<br /><br />&#8203;</span><span style="color:rgb(0, 0, 0)">They are presented with two pieces of evidence&mdash;one telling in favor of a claim C, one telling against it. They have limited time and energy to process this evidence.</span><span style="color:rgb(0, 0, 0)">&nbsp;&nbsp;</span><span style="color:rgb(0, 0, 0)">As a result, the group that believes C spends more time scrutinizing the evidence&nbsp;</span><em style="color:rgb(0, 0, 0)">against</em><span style="color:rgb(0, 0, 0)">&nbsp;C; the group that disbelieves C spends more time scrutinizing the evidence&nbsp;</span><em style="color:rgb(0, 0, 0)">in favor</em><span style="color:rgb(0, 0, 0)">&nbsp;of C.</span><br /><br /><span style="color:rgb(0, 0, 0)">In scrutinizing the evidence against their prior belief, what they are doing is looking for a flaw in the argument; a gap in the reasoning; or, more generally, an&nbsp;</span><strong style="color:rgb(0, 0, 0)">alternative explanation</strong><span style="color:rgb(0, 0, 0)">&nbsp;that could nullify the force of the evidence.</span><br /><br /><span style="color:rgb(0, 0, 0)">For example, when I read both op-eds, I spent a lot more time thinking about Cruz&rsquo;s reasons in favor of appointing someone (I even did some googling to fact check them).</span><span style="color:rgb(0, 0, 0)">&nbsp;</span><span style="color:rgb(0, 0, 0)">In doing so, I was able to spot the fact that some of the reasoning was misleadingly worded; for instance:</span></div>  <blockquote><span style="color:rgb(0, 0, 0)">&ldquo;</span><a href="https://www.nationalreview.com/2020/08/history-is-on-the-side-of-republicans-filling-a-supreme-court-vacancy-in-2020/?utm_source=recirc-desktop&amp;utm_medium=homepage&amp;utm_campaign=river&amp;utm_content=featured-content-trending&amp;utm_term=first">Twenty-nine times</a><span style="color:rgb(0, 0, 0)">&nbsp;in our nation&rsquo;s history we&rsquo;ve seen a Supreme Court vacancy in an election year or before an inauguration, and in every instance, the president proceeded with a nomination.&rdquo;</span></blockquote>  <div class="paragraph"><span style="color:rgb(0, 0, 0)">True. 
But this glosses over the fact that just 4 years ago, Obama did indeed &ldquo;proceed with a nomination&rdquo;&mdash;and in response Senate Republicans (</span><a href="https://www.theguardian.com/us-news/live/2016/mar/16/election-live-trump-clinton-sanders-primaries-missouri-presidential-campaign?page=with%3Ablock-56e99960e4b0072e644fc119">with Cruz&rsquo;s support</a><span style="color:rgb(0, 0, 0)">)&nbsp;</span><em style="color:rgb(0, 0, 0)">blocked</em><span style="color:rgb(0, 0, 0)">&nbsp;that nomination using the excuse that it was an election year.</span><br /><br /><span style="color:rgb(0, 0, 0)">The point? I decided to spend little time thinking about the details of the New York Times&rsquo;s argument, and so found little reason to object to it; instead, I spent my time scrutinizing Cruz&rsquo;s argument, and when I did I found reasons to discount it.</span><br /><br /><span style="color:rgb(0, 0, 0)">Meanwhile, Becca did the opposite: she scrutinized the New York Times&rsquo;s argument more than Cruz&rsquo;s, and in doing so no doubt found flaws in the argument.</span><br /><br /><span style="color:rgb(0, 0, 0)">Notice what that means: although Becca and I were presented with the same evidence initially, the way we chose to&nbsp;</span><em style="color:rgb(0, 0, 0)">process</em><span style="color:rgb(0, 0, 0)">&nbsp;it meant we ended up with&nbsp;</span><em style="color:rgb(0, 0, 0)">different</em><span style="color:rgb(0, 0, 0)">&nbsp;evidence by the end of it.</span><span style="color:rgb(0, 0, 0)">&nbsp;</span><span style="color:rgb(0, 0, 0)">I knew subtle details about Cruz&rsquo;s argument that Becca didn&rsquo;t notice; Becca knew subtle details about the New York Times argument that I didn&rsquo;t notice.</span><br /><br /><span style="color:rgb(0, 0, 0)">There are two claims I want to make about the way in which selective scrutiny led us to have different evidence.</span><br /><br /><span style="color:rgb(0, 0, 0)"><strong>First:</strong> such selective scrutiny leads to predictable shifts in our beliefs. For example, as I was setting out to scrutinize Fox&rsquo;s op-ed, I could expect that doing so would make me more confident in my prior belief that RBG&rsquo;s seat should not yet be filled.</span><br /><br /><span style="color:rgb(0, 0, 0)"><strong>Second:</strong> nevertheless, such selective scrutiny is epistemically rational&mdash;if what you want is to get to the truth of the matter, it often makes sense to spend more energy scrutinizing evidence that disconfirms your prior beliefs than that which confirms them.</span><br /><br /><span style="color:rgb(0, 0, 0)">Why are these claims true?</span><br /><br /><span style="color:rgb(0, 0, 0)">Scrutinizing a piece of evidence is a form of&nbsp;</span><a href="https://mitpress.mit.edu/books/cognitive-search"><strong>cognitive search</strong></a><span style="color:rgb(0, 0, 0)">: you are searching for an alternative explanation that would fit the facts of the argument but remove its force.</span><br /><br /><span style="color:rgb(0, 0, 0)">If you&rsquo;ve kept up with this blog, that should sound familiar: it&rsquo;s a lot like searching your&nbsp;</span><em style="color:rgb(0, 0, 0)">lexicon</em><span style="color:rgb(0, 0, 0)">&nbsp;for a word that fits a string&mdash;i.e. 
a&nbsp;</span><a href="https://www.kevindorst.com/stranger_apologies/how-to-polarize-rational-people"><strong>word-completion task</strong></a><span style="color:rgb(0, 0, 0)">.</span><span style="color:rgb(0, 0, 0)">&nbsp;</span><span style="color:rgb(0, 0, 0)">When I look closely at Cruz&rsquo;s argument and search for flaws, cognitively what I&rsquo;m doing is just like when I look closely at a string of letters&mdash;say, &lsquo;_E_RT&rsquo;&mdash;and search for a word that completes it.</span><span style="color:rgb(0, 0, 0)">&nbsp;&nbsp;</span><span style="color:rgb(0, 0, 0)">(Hint: what&rsquo;s in your chest?)</span><br /><br /><span style="color:rgb(0, 0, 0)">In both cases, if I&nbsp;</span><em style="color:rgb(0, 0, 0)">find&nbsp;</em><span style="color:rgb(0, 0, 0)">what I&rsquo;m looking for (a problem with Cruz&rsquo;s argument; a word that completes the string) I get strong, unambiguous evidence, and so I know what to think (the argument is no good; the string is completable).</span><span style="color:rgb(0, 0, 0)">&nbsp;&nbsp;</span><span style="color:rgb(0, 0, 0)">But if I&nbsp;</span><em style="color:rgb(0, 0, 0)">try and fail</em><span style="color:rgb(0, 0, 0)">&nbsp;to find what I&rsquo;m looking for, I get weak, ambiguous evidence&mdash;I should be unsure whether to think the argument is any good; I should be unsure how confident to be that the string is completable.</span><span style="color:rgb(0, 0, 0)">&nbsp; &nbsp;</span><br /><br /><span style="color:rgb(0, 0, 0)">Thus scrutinizing an argument leads to predictable polarization in the exact same way our word-completion tasks do.</span><span style="color:rgb(0, 0, 0)">&nbsp;&nbsp;</span><span style="color:rgb(0, 0, 0)">If I find a flaw in Cruz&rsquo;s argument, my confidence in my prior belief goes way up; if I don&rsquo;t find a flaw, it goes only a little bit down.</span><span style="color:rgb(0, 0, 0)">&nbsp;</span><span style="color:rgb(0, 0, 0)">Thus, on average, <a href="https://www.kevindorst.com/stranger_apologies/rational-polarization-can-be-profound-persistent-and-predictable" target="_blank">selective scrutiny will increase my confidence</a>.</span><br /><br /><span style="color:rgb(0, 0, 0)"><strong>Nevertheless, such selective scrutiny is epistemically rational</strong>.</span><span style="color:rgb(0, 0, 0)">&nbsp;&nbsp;</span><span style="color:rgb(0, 0, 0)">Why?</span><br /><br /><span style="color:rgb(0, 0, 0)">Because it's a&nbsp;</span><em style="color:rgb(0, 0, 0)">good way to avoid ambiguous evidence</em><span style="color:rgb(0, 0, 0)">&mdash;and, therefore, is often a good way to make your beliefs more accurate.</span><br /><br /><span style="color:rgb(0, 0, 0)">To see this, ask yourself: would you rather do a word-completion task where, if there&rsquo;s a word, it&rsquo;s&nbsp;</span><em style="color:rgb(0, 0, 0)">easy</em><span style="color:rgb(0, 0, 0)">&nbsp;to find (like &lsquo;C_T&rsquo;), or&nbsp;</span><em style="color:rgb(0, 0, 0)">hard</em><span style="color:rgb(0, 0, 0)">&nbsp;to find (like &rsquo;_EAR_T&rsquo;)?</span><span style="color:rgb(0, 0, 0)">&nbsp;&nbsp;</span><span style="color:rgb(0, 0, 0)">Obviously you&rsquo;d prefer to do the former, since the easier it is to recognize a word, the easier it is to assess your evidence and come to an accurate conclusion.</span><br /><br /><span style="color:rgb(0, 0, 0)">Thus if you&rsquo;re given a choice between two different cognitive searches&mdash;scrutinize Cruz&rsquo;s argument, or scrutinize the NYT&rsquo;s&mdash;often the best 
way to get accurate beliefs is to&nbsp;</span><em style="color:rgb(0, 0, 0)">scrutinize the one where you expect to find a flaw.</em><br /><br /><span style="color:rgb(0, 0, 0)">Which one is that? More likely than not, the argument that&nbsp;</span><em style="color:rgb(0, 0, 0)">disconfirms</em><span style="color:rgb(0, 0, 0)">&nbsp;your prior beliefs, of course!</span><span style="color:rgb(0, 0, 0)">&nbsp;&nbsp;</span><span style="color:rgb(0, 0, 0)">For, given your prior beliefs, you should think that such arguments are more likely to contain flaws, and that their flaws will be easier to recognize.</span><br /><br /><span style="color:rgb(0, 0, 0)">Thus <em>I</em> expect <em>Cruz&rsquo;s</em> argument to contain a flaw, so I scrutinize it; and&nbsp;</span><em style="color:rgb(0, 0, 0)">Becca</em><span style="color:rgb(0, 0, 0)">&nbsp;expects the <em>NYT&rsquo;s</em> argument to contain flaw, so she scrutinizes&nbsp;</span><em style="color:rgb(0, 0, 0)">it</em><span style="color:rgb(0, 0, 0)">. These choices are rational&mdash;despite the fact that they predictably lead our believes to polarize.</span><br /><br /><span style="color:rgb(0, 0, 0)">We can buttress this conclusion formalizing and simulating this process.</span><br /><br /><span style="color:rgb(0, 0, 0)">Given your prior beliefs and a piece of evidence to scrutinize, we can calculate the&nbsp;</span><em style="color:rgb(0, 0, 0)">expected accuracy</em><span style="color:rgb(0, 0, 0)">&nbsp;of doing so.</span><span style="color:rgb(0, 0, 0)">&nbsp;&nbsp;</span><span style="color:rgb(0, 0, 0)">(As always, the belief-transitions in my models satisfy the&nbsp;</span><a href="https://www.kevindorst.com/stranger_apologies/what-is-rational-polarization">value of evidence</a><span style="color:rgb(0, 0, 0)">&nbsp;with respect to the live question&mdash;say, whether the evidence contains a flaw&mdash;so you always expect to get&nbsp;</span><em style="color:rgb(0, 0, 0)">more</em><span style="color:rgb(0, 0, 0)">&nbsp;accurate by scrutinizing it. See the&nbsp;</span><a href="https://www.kevindorst.com/reasonably-polarized-technical-appendix.html">technical appendix</a><span style="color:rgb(0, 0, 0)">.)</span><br /><br /><span style="color:rgb(0, 0, 0)">I randomly generated 10,000 such potential cognitive searches of pieces of evidence, and plotted how likely you are to find a flaw in the evidence (if there is one) against how accurate you expect scrutinizing the argument to make you. 
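<div class="paragraph"><span style="color:rgb(0, 0, 0)">(The official models live in the technical appendix; here is a minimal Python sketch of how such a simulation <em>could</em> go. It is my own illustration, not the appendix's code: it assumes a simple two-state setup in which the evidence either contains a flaw or it doesn't, draws a random prior <code>p_flaw</code> and a random chance <code>p_find</code> of finding a flaw if there is one, and measures accuracy as one minus Brier score. All of those modeling choices are mine.)</span></div>  <pre><code>import random

def expected_accuracy(p_flaw, p_find):
    """Expected accuracy (1 - Brier score) about "does this evidence
    contain a flaw?" after scrutinizing it."""
    # Posterior after a failed search (Bayes); a successful search gives posterior 1.
    p_no_find = p_flaw * (1 - p_find) + (1 - p_flaw)
    q = p_flaw * (1 - p_find) / p_no_find
    return (p_flaw * p_find * 1.0                         # flaw found: exactly right
            + p_flaw * (1 - p_find) * (1 - (1 - q) ** 2)  # real flaw, but missed
            + (1 - p_flaw) * (1 - q ** 2))                # no flaw to find

searches = [(random.random(), random.random()) for _ in range(10_000)]
pairs = [(p_find, expected_accuracy(p_flaw, p_find)) for p_flaw, p_find in searches]

# Correlation between find-chance and expected accuracy, computed by hand.
n = len(pairs)
mx = sum(x for x, _ in pairs) / n
my = sum(y for _, y in pairs) / n
cov = sum((x - mx) * (y - my) for x, y in pairs) / n
sx = (sum((x - mx) ** 2 for x, _ in pairs) / n) ** 0.5
sy = (sum((y - my) ** 2 for _, y in pairs) / n) ** 0.5
print(cov / (sx * sy))   # reliably positive
</code></pre>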
<div class="paragraph"><span style="color:rgb(0, 0, 0)">As can be seen in the resulting plot, there is a substantial positive correlation between the two:</span></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/10-17-b6-find-acc-correlation_orig.jpeg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%">Simulation of 10,000 random cognitive searches, plotting the chance of finding the item you're searching for against the expected accuracy of the search.</div> </div></div>  <div class="paragraph">This means that it makes sense to tend to scrutinize evidence for which you expect to be able to recognize its flaws&mdash;i.e., often, that which <em>dis</em>confirms your prior beliefs.<br /><br />In particular, suppose Becca and I started out each expecting 50% of the pieces of evidence for/against replacing RBG to contain flaws, but I am slightly better at finding flaws in the supporting evidence, and she is slightly better at finding flaws in the detracting evidence.<br /><br />Suppose then we are presented with a series of random pairs of pieces of evidence&mdash;one in favor, one against&mdash;and at each stage we decide to scrutinize the one that we expect to make us more accurate.<span>&nbsp; </span>Since expected accuracy is correlated with whether we expect to find flaws, this means that I will be slightly more likely to scrutinize the supporting evidence, and she will be slightly more likely to scrutinize the detracting evidence.<br /><br />As a result, we&rsquo;ll polarize.<span>&nbsp; </span>Even if, in fact, exactly 50% of the pieces of evidence tell in each direction, I will come to be confident that <em>fewer</em> than 50% of the pieces of evidence support replacing RBG, and she&rsquo;ll come to be confident that <em>more</em> than 50% of them do:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/10-17-b6-cs-acc-polarization_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%">Simulation of two groups of agents scrutinizing bits of evidence about a fixed question. Red lines are agents like Becca, who are better at finding flaws in detracting evidence. Blue are agents like me, better at finding flaws in supporting evidence. Thick lines are averages of each group.</div> </div></div>  <div class="paragraph"><br /><strong>Upshot:</strong> <strong>biased assimilation can be rational. </strong>People with opposing beliefs who care only about the truth and are presented with the same evidence can be expected to polarize, since the best way to assess that evidence&nbsp;will often be to apply selective scrutiny to the evidence that disconfirms their beliefs.<br /><br />&#8203;There is some empirical support for this type of explanation (though more is needed). Biased assimilation is clearly driven by the process of selective scrutiny (<a href="https://psycnet.apa.org/record/1981-05421-001">Lord et al. 1979</a>, <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1540-5907.2006.00214.x">Taber and Lodge 2006</a>, <a href="https://www.jstor.org/stable/20620131">Kelly 2008</a>). Biased assimilation is more common when the evidence is ambiguous or hard to interpret (<a href="https://psycnet.apa.org/record/1994-25314-001">Chaiken and Maheswaran 1994</a>, <a href="http://www.communicationcache.com/uploads/1/0/8/8/10887248/attitude_change_-_multiple_roles_for_persuasion_variables.pdf">Petty 1998</a>).&nbsp; And the best-known &ldquo;debiasing&rdquo; technique is to explicitly instruct people to &ldquo;consider the opposite&rdquo;, i.e. to do cognitive searches that are expected to disconfirm their prior beliefs (<a href="https://media.gradebuddy.com/documents/1657138/1d2520cf-4a80-4d38-be44-2b4255bc0a46.pdf">Koriat 1980</a>, <a href="https://psycnet.apa.org/record/1985-12023-001">Lord et al. 1984</a>).<br /><br />If my explanation is right, this is, in effect, asking people to <em>not</em> let accuracy guide their choice of cognitive searches&mdash;and it therefore is no surprise that people do not do this spontaneously.<br /><br />In fact, it means that <em>we can prevent people from polarizing only by preventing them from trying to be accurate</em>.<br /><br /><br />What next?<br /><strong>The argument of this post draws heavily on</strong> a <a href="https://philpapers.org/rec/KELDDA">fantastic paper by Tom Kelly</a> about belief polarization. It&rsquo;s definitely worth reading, along with <a href="https://philpapers.org/rec/MCWEAB">Emily McWilliams&rsquo;s reply</a>.<br /><strong>Jess Whittlestone</strong> has a <a href="https://jesswhittlestone.com/blog/2018/1/10/reflections-on-confirmation-bias">fantastic blog post</a> summarizing <a href="http://wrap.warwick.ac.uk/95233/1/WRAP_Theses_Whittlestone_2017.pdf">her dissertation on confirmation bias</a>&mdash;and how she completely changed her mind about the phenomenon.<br /><strong>For more details,</strong> as always, check out the <a href="https://www.kevindorst.com/reasonably-polarized-technical-appendix.html">technical appendix</a> (&sect;6).<br /><strong><a href="https://www.kevindorst.com/stranger_apologies/why-arguments-polarize-us">Next post:</a></strong> Why arguments polarize us.<br /></div>]]></content:encoded></item><item><title><![CDATA[Weighing the Risks (Guest Post)]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/weighing-the-risks-guest-post]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/weighing-the-risks-guest-post#comments]]></comments><pubDate>Sat, 10 Oct 2020 12:41:07 GMT</pubDate><category><![CDATA[Framing effects]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/weighing-the-risks-guest-post</guid><description><![CDATA[Reasonably Polarized&nbsp;will be back next week. In the meantime, here's&nbsp;a guest post on the rationality of framing effects, by Sarah Fisher (University of Reading), based on a&nbsp;forthcoming paper&nbsp;of hers that asks whether the "at least" reading of number terms can yield a rational explanation of framing effects.&nbsp; The paper recently&nbsp;won Cr&iacute;tica's essay competition&nbsp;on the theme of empirically informed philosophy&mdash;congrats Sarah!2300 words; 10 minute read.& [...] ]]></description><content:encoded><![CDATA[<div class="paragraph"><font color="#818181"><em>Reasonably Polarized</em>&nbsp;will be back next week.
In the meantime, here's&nbsp;a guest post on the rationality of <strong><a href="https://en.wikipedia.org/wiki/Framing_effect_(psychology)" target="_blank">framing effects</a></strong>, by <strong><a href="https://sites.google.com/view/sarahafisher/home" target="_blank">Sarah Fisher</a></strong> (University of Reading), based on a&nbsp;<a href="https://drive.google.com/file/d/1SkaJ9LGdeoZ06zCL7vcCXjp12CZyx_zt/view">forthcoming paper</a>&nbsp;of hers that asks whether the "at least" reading of number terms can yield a rational explanation of framing effects.&nbsp; The paper recently&nbsp;<a href="http://www.filosoficas.unam.mx/sitio/critica-announce-outcome-essay-competition">won Cr&iacute;tica's essay competition</a>&nbsp;on the theme of empirically informed philosophy&mdash;congrats Sarah!</font><br /><font color="#818181">2300 words; 10 minute read.<br />&#8203;</font></div>  <div class="paragraph">As we learn to live in the &lsquo;new normal&rsquo;, amidst the easing and tightening of local and national lockdowns, day-to-day decision-making has become fraught with difficulty. Here are some of the questions I&rsquo;ve been grappling with lately:<ul style=""><li>When would be a good time to visit friends and family members?</li><li>Should I head out to the shop/ pub/ restaurant/ hairdresser this week?</li><li>Would it be more sensible to stay home?</li></ul> A year ago these questions had easier answers. I only needed to check in on my mood, bank balance, or general state of dishevelment. Now it&rsquo;s far harder to weigh up the costs and benefits of going out and about. There&rsquo;s so much more hanging on each decision. Is it worth taking the risk of picking up &ndash; or passing on &ndash; the virus? Of course, we will all settle on our own ways of balancing these concerns. But, in this post, I&rsquo;m going to look at how our attitudes to risk depend on <em>framing</em>.</div>  <div>  <!--BLOG_SUMMARY_END--></div>  <h2 class="wsite-content-title">&#8203;<strong>Risky-Choice Problems</strong></h2>  <div class="paragraph">Imagine you are offered the following choice:</div>  <blockquote>A. Receiving $\$$10 for sure.<br />&#8203;B. A 50% chance of receiving $\$$20 and a 50% chance of receiving $\$$0.</blockquote>  <div class="paragraph">Option (B) involves risk &ndash; if you choose it, you can&rsquo;t be sure of the outcome. Hence, this is an example of a &lsquo;risky-choice task&rsquo;. Have a think about which option you&rsquo;d prefer (and write it down if you&rsquo;re worried about <a href="https://en.wikipedia.org/wiki/Hindsight_bias" style="">hindsight bias</a>).<br /><span></span>Now consider the following choice:<br /><span></span></div>  <blockquote>C. Losing $\$$10 for sure.<br />&#8203;D. A 50% chance of losing $\$$20 and a 50% chance of losing $\$$0.</blockquote>  <div class="paragraph">Which option would you prefer this time?<br /><br />Typically, people prefer option (A) in the first task and option (D) in the second.<br /><br />Why is this puzzling? Well, first notice that options A and B in the first task have the same &lsquo;expected value&rsquo; as each other.
We can work out the expected value of gaining $\$$10 with a probability of 1 as follows:<ul><li>$\$$10 x 1 = $\$$<strong>10</strong></li></ul> Meanwhile, the expected value of gaining $\$$20 with a probability of 0.5 and gaining $\$$0 with a probability of 0.5 can be worked out like this:<ul><li>($\$$20 x 0.5) + ($\$$0 x 0.5) = $\$$10 + $\$$0 = $\$$<strong>10</strong></li></ul> So, each option is worth 10 dollars.<br /><br />Why do people tend to prefer option A then? Presumably, because they prefer to have a sure thing rather than risk walking away with nothing. In other words, people are <em>risk-averse</em>.<br /><br />But now let&rsquo;s look at the second task. Again, options C and D have the same expected value. The expected value of losing $\$$10 with a probability of 1 can be worked out as follows:<ul><li>$-\$$10 x 1 = $-\$$<strong>10</strong></li></ul> And this is equal to the expected value of losing $\$$20 with a probability of 0.5 and losing $\$$0 with a probability of 0.5:<ul><li>($-\$$20 x 0.5) + ($\$$0 x 0.5) = $-\$$10 + $\$$0 = $-\$$<strong>10</strong></li></ul>In fact, the only difference between the two choice problems is that the first one deals in positives (<em>gaining </em>10 dollars) while the second deals in negatives (<em>losing </em>10 dollars).<br /><br />But in the second task, people tend to prefer option D. That indicates <em>risk-seeking </em>behaviour: other things equal, they prefer to take the risk of losing 20 dollars over the certainty of losing 10 dollars. So, it looks like people flip from being risk-averse when they are faced with <em>gains</em> (as in the first task), to being risk-seeking when they are faced with <em>losses</em> (as in the second task).</div>
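<div class="paragraph">(To see the same arithmetic in code, here is a minimal Python sketch; the <code>expected_value</code> function and the option lists are just my illustration of the calculations above.)</div>  <pre><code>def expected_value(lottery):
    # A lottery is a list of (outcome, probability) pairs.
    return sum(outcome * prob for outcome, prob in lottery)

option_a = [(10, 1.0)]             # receive $10 for sure
option_b = [(20, 0.5), (0, 0.5)]   # 50% receive $20, 50% receive $0
option_c = [(-10, 1.0)]            # lose $10 for sure
option_d = [(-20, 0.5), (0, 0.5)]  # 50% lose $20, 50% lose $0

print(expected_value(option_a), expected_value(option_b))  # 10.0 10.0
print(expected_value(option_c), expected_value(option_d))  # -10.0 -10.0
</code></pre>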
<div class="paragraph">This &lsquo;reflection effect&rsquo; was first brought to light by the psychologists Daniel Kahneman and Amos Tversky. They factor it into their &lsquo;prospect theory&rsquo; of decision-making under risk, which is designed to model how people <em>really </em>make decisions (not how they <em>should</em>!). You can read their seminal paper on prospect theory <a href="https://www.uzh.ch/cmsssl/suz/dam/jcr:00000000-64a0-5b1c-0000-00003b7ec704/10.05-kahneman-tversky-79.pdf">here</a>.<br /><br />The &lsquo;shifty&rsquo; nature of our risk attitudes is a fascinating topic in its own right. Why do we prefer sure gains but risky losses? For now, I&rsquo;m going to put that question to one side because I want to focus on a different one: Can options be made to <em>seem</em> like gains or losses just by framing them in particular ways?</div>  <div class="paragraph" style="text-align:right;"><font color="#818181">(1600 words left)</font></div>  <h2 class="wsite-content-title"><strong>Risky-Choice Framing</strong></h2>  <div class="paragraph">The following scenario is adapted from papers by David Mandel, published in <a href="https://www.sciencedirect.com/science/article/abs/pii/S0749597800929327">2001</a> and <a href="https://doi.apa.org/doiLanding?doi=10.1037%2Fa0034207" target="_blank">2014</a> (and it is itself inspired by a classic scenario introduced by Tversky and Kahneman <a href="https://science.sciencemag.org/content/211/4481/453">here</a>).</div>  <blockquote>In a war-torn region, the lives of 600 stranded people are at stake. Two response plans with the following outcomes have been proposed. Assume that the estimates provided are accurate.<br /><br />&#8203;If Plan A is adopted, it is certain that 200 people will be saved.<br /><br />If Plan B is adopted, there is a one-third probability that all 600 will be saved and a two-thirds probability that nobody will be saved.<br /><br />Which of the two plans would you choose &ndash; A or B?<br /><span></span></blockquote>  <div class="paragraph">Now take a look at this slightly different version:<br /><span></span></div>  <blockquote>In a war-torn region, the lives of 600 stranded people are at stake. Two response plans with the following outcomes have been proposed. Assume that the estimates provided are accurate.<br /><br />If Plan C is adopted, it is certain that 400 people will die.<br /><br />If Plan D is adopted, there is a two-thirds probability that all 600 will die and a one-third probability that nobody will die.<br /><br />&#8203;Which of the two plans would you choose &ndash; C or D?<br /><span></span></blockquote>  <div class="paragraph">In the first task, people tend to prefer the sure option, Plan A. But, in the second task they tend to prefer the risky option, Plan D. Perhaps you did as well.<br /><br />The pattern is reminiscent of our earlier pair of choice tasks. But there&rsquo;s an important difference: the gains and losses are only <em>apparent </em>now. Plan A and Plan C are supposed to have exactly the same outcome, namely 200 people being saved and 400 people dying. Plan B and Plan D are also supposed to be equivalent: both involve a one-third probability of everyone being saved (i.e. nobody dying) and a two-thirds probability of nobody being saved (i.e. everyone dying). So, it&rsquo;s not as though the first version of the task involves actual gains while the second one involves actual losses. Even if it&rsquo;s true that we&rsquo;re risk-averse for gains and risk-seeking for losses, that can&rsquo;t completely explain the responses here. Cases like this are especially puzzling.<br /><br />Tversky and Kahneman put forward a solution. They point out that in the first version of the choice task, positive language is used: the options are described in terms of the number of people who will &lsquo;be saved&rsquo;. This, they suggest, makes options A and B <em>sound like </em>gains, even though some people could die.<br /><br />In contrast, the second version of the choice task uses negative language, talking about the number of people who will &lsquo;die&rsquo;. This makes options C and D <em>sound like</em> losses, even though some people could be saved.<br /><br />Once we&rsquo;ve interpreted the options as gains or losses, prospect theory predicts what we&rsquo;ll do next. On the one hand, since we tend to be risk-<em>averse</em> when we think we&rsquo;re facing gains, we&rsquo;ll choose Plan A in the first version of the task. On the other hand, since we tend to be risk-<em>seeking</em> when we think we&rsquo;re facing losses, we&rsquo;ll choose Plan D in the second version. And, as we saw, that&rsquo;s precisely the pattern the psychologists find.<br /><br />As an aside, I wonder whether the British and Irish governments had this effect in mind early on in the COVID-19 outbreak. In a <a href="https://twitter.com/billybragg/status/1238467261213683712" style="">tweet from 13th March</a>, Billy Bragg wryly comments on their contrasting communication strategies. Whereas Johnson was using a negative frame, warning that many people would die, Varadkar chose a positive frame, claiming that many could be saved. If Tversky and Kahneman are right, Varadkar&rsquo;s positive framing could have encouraged a relatively cautious response, which would be consistent with Ireland&rsquo;s swift lockdown. Meanwhile, Johnson could have been encouraging Brits to take a riskier approach (herd immunity&hellip;?).<br /><br />This particular case aside, prospect theory has been extremely influential in academia, industry and popular culture. And &lsquo;risky-choice framing effects&rsquo; are commonly thought to involve a double dose of irrationality: first, the superficial differences in language affect how we <em>perceive</em> the options facing us (although see my <a href="https://www.kevindorst.com/stranger_apologies/a-glass-half-full" style="">last blog post</a> for a rationalising explanation of the effects of positive and negative frames); second, perceiving options as gains or losses affects which one we <em>prefer</em>.<br /><br />But Tversky and Kahneman&rsquo;s account isn&rsquo;t universally accepted. And one way of challenging it is by questioning the equivalence of the options in each version of a risky-choice task. So, in the above example, is 200 people being saved really the same as 400 people dying? Not necessarily&hellip;<br /><span></span></div>  <div class="paragraph" style="text-align:right;"><font color="#818181">(800 words left)</font></div>  <h2 class="wsite-content-title"><strong>A Challenge</strong></h2>  <div class="paragraph"><font color="#2a2a2a"><a href="https://doi.apa.org/doiLanding?doi=10.1037%2Fa0034207" target="_blank">This study</a> investigates a different explanation of risky-choice framing effects. The author, David Mandel, suggests that many people interpret number terms like &lsquo;200&rsquo; and &lsquo;400&rsquo; as having &lsquo;at least&rsquo; meanings. In fact, this possibility is well-recognised by linguists and philosophers of language. For instance, when we are instructed to keep two metres apart, this is clearly a minimum social distancing measure &ndash; even better if it&rsquo;s three or four metres.<br /><br />So, saying that &lsquo;200 people will be saved&rsquo; may be interpreted as meaning that <em>at least </em>200 people will be saved. That leaves open the (good!) possibility that more than 200 people could be saved under Plan A (i.e. less than 400 would die).<br /><br />On the flipside, saying that &lsquo;400 people will die&rsquo; may be interpreted as meaning that <em>at least</em> 400 people will die. That leaves open the (bad!) possibility that more than 400 could die (i.e. less than 200 would be saved). So, strictly speaking, Plan A is a <em>better </em>prospect than Plan C. Perhaps that could be enough to explain why people prefer Plan A in the first version of the problem and Plan D in the second version. Then we needn&rsquo;t conclude that they have inconsistent attitudes to risk.<br /><br />Mandel conducted a series of experiments. In summary:</font><ul><li>When he added &lsquo;exactly&rsquo; before the number expression (as in &lsquo;exactly 200 people will be saved&rsquo;, and &lsquo;exactly 400 people will die&rsquo;), there was no statistically significant framing effect.
In other words, people no longer reliably switched between the sure option and the risky option across the two versions of the problem.</li><li>When he added &lsquo;at least&rsquo; before the number expression (as in &lsquo;at least 200 people will be saved&rsquo;, and &lsquo;at least 400 people will die&rsquo;), there was a particularly <em>large</em> framing effect. So, by making the &lsquo;at least&rsquo; explicit, the Plan A/ Plan D response pattern was exacerbated.</li><li>Finally, when he asked people how they interpreted the (unmodified) number terms &lsquo;200&rsquo; and &lsquo;400&rsquo;, only the people who spontaneously adopted &lsquo;at least&rsquo; readings were subject to framing effects.</li></ul><br />These are really striking results. It looks like risky-choice framing effects could be all down to how people interpret number terms. So&hellip;case closed? Well, not quite. Two attempts to confirm the results of one of Mandel&rsquo;s experiments failed to eliminate framing effects (see <a href="http://datacolada.org/11">here</a> and <a href="https://psycnet.apa.org/doiLanding?doi=10.1037%2Fxlm0000158" target="_blank">here</a>, with Mandel&rsquo;s reply to the first of these <a href="https://psyarxiv.com/34jeg/">here</a>). In both cases, risky-choice framing effects arose even when number terms like &lsquo;200&rsquo; and &lsquo;400&rsquo; were being understood <em>exactly</em>.<br /><br />&#8203;It seems unlikely, then, that risky-choice framing effects are <em>entirely</em> explained by &lsquo;at least&rsquo; interpretations of number terms (and this is a point which Mandel himself is careful to make &ndash; in separate work, like <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/bdm.1863">this paper</a> written with Michael Tombu, he has developed another proposal which could explain the remainder of the effect). Still, I think these &lsquo;lower-bounded&rsquo; interpretations of number terms are an important contributing factor (and I argue in this <a href="https://drive.google.com/file/d/1njAGwqypd4eUm25hW2ksG59z5V-fQPII/view">draft paper</a> that Mandel&rsquo;s critics should think so too). That&rsquo;s worth noting because it could allow us to rationalise at least <em>some</em> of people&rsquo;s risky-choice behaviour.<br /><br />How does this relate to communications and decision-making during the global pandemic? One useful takeaway is that numbers can be understood in different ways &ndash; as &lsquo;exact&rsquo; or &lsquo;at least&rsquo; (and sometimes &lsquo;at most&rsquo;, as when locked-down Brits were allowed to take &lsquo;one&rsquo; outdoor excursion a day). So, when we hear politicians and scientists predicting COVID-19 case numbers or death tolls, it&rsquo;s worth reflecting on whether these are supposed to be point estimates, bare minimums, or upper limits.<br /><br />In practice, the high degree of uncertainty in the current climate often makes it hard to put numbers on outcomes at all. And that may make the &lsquo;lower-bounding hypothesis&rsquo; described above less relevant to our ordinary day-to-day risk judgements. (However, government decision-makers are far more likely to be presented with quantified outcomes and probabilities, so the findings may be more applicable at that level.)</div>
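<div class="paragraph">(Here is one way to make the lower-bounding point concrete: a minimal Python sketch under my own simplified reading of the descriptions, not Mandel&rsquo;s experimental materials. The helper functions <code>survivors_if_saved</code> and <code>survivors_if_die</code> are hypothetical names of mine; each returns the set of survivor counts, out of 600, that a description leaves open.)</div>  <pre><code>TOTAL = 600

def survivors_if_saved(n, reading):
    # Survivor counts compatible with "n people will be saved".
    return {n} if reading == "exactly" else set(range(n, TOTAL + 1))

def survivors_if_die(n, reading):
    # Survivor counts compatible with "n people will die".
    deaths = {n} if reading == "exactly" else set(range(n, TOTAL + 1))
    return {TOTAL - d for d in deaths}

# On 'exactly' readings, Plan A (200 saved) and Plan C (400 die) coincide:
print(survivors_if_saved(200, "exactly") == survivors_if_die(400, "exactly"))  # True

# On 'at least' readings they come apart: A leaves open only upside,
# C leaves open only downside -- so A looks like the strictly better sure option.
plan_a = survivors_if_saved(200, "at least")
plan_c = survivors_if_die(400, "at least")
print(min(plan_a), max(plan_a))  # 200 600
print(min(plan_c), max(plan_c))  # 0 200
</code></pre>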
<div class="paragraph">Some important questions we are left with, then, are:<ul><li>What other factors contribute to the risky-choice framing effects we are subject to on a day-to-day basis?</li><li>Are Kahneman and Tversky right to focus on our perception of options as gains or losses?</li><li>And is it rational to be sensitive to such factors?</li></ul><br />To sum up, I think the jury is still out on whether risky-choice framing effects can be <em>entirely</em> rationalised. Still, we shouldn&rsquo;t be too quick to conclude that they can&rsquo;t. And, in the meantime, we can try to notice when framing is affecting us &ndash; perhaps even using that knowledge to our advantage. Recently, I&rsquo;ve found it particularly useful to reframe my choices, to see how that affects my attitudes and preferences. For instance, staying home may offer predictable monotony, compared with the chance of greater enjoyment, but it also offers safety over the risk of infection. Perhaps trying out both perspectives can help challenge our own default ways of thinking &ndash; and our understanding of each other&rsquo;s.</div>  <div class="paragraph"><br />&#8203;<br />What next?<br /><strong>If you&rsquo;d like to find out more about the different theories of framing effects</strong>, <a href="https://onlinelibrary.wiley.com/doi/10.1002/9781118468333.ch20">this survey chapter by Karl Halvor Teigen</a> is a good place to start.<br /><strong>For more on the philosophical interpretation of the lower-bounding hypothesis</strong>, check out Sarah's&nbsp;<a href="https://drive.google.com/file/d/1SkaJ9LGdeoZ06zCL7vcCXjp12CZyx_zt/view">forthcoming paper</a>&nbsp;on the topic.<br /></div>]]></content:encoded></item><item><title><![CDATA[Rational Polarization Can Be Profound, Persistent, and Predictable]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/rational-polarization-can-be-profound-persistent-and-predictable]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/rational-polarization-can-be-profound-persistent-and-predictable#comments]]></comments><pubDate>Sat, 03 Oct 2020 04:00:00 GMT</pubDate><category><![CDATA[Polarization]]></category><category><![CDATA[Reasonably Polarized series]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/rational-polarization-can-be-profound-persistent-and-predictable</guid><description><![CDATA[(2000 words; 9 minute read.)  So far, I&rsquo;ve laid the foundations for a story of rational polarization.&nbsp;I&rsquo;ve argued that we have reason to explain polarization through rational mechanisms; showed that ambiguous evidence is necessary&nbsp;to do so; and described an experiment illustrating this possibility.Today, I&rsquo;ll conclude the core theoretical argument. I'll give an ambiguous-evidence model of our experiment that both (1) explains the predictable polarization it induces, a [...] ]]></description><content:encoded><![CDATA[<div class="paragraph"><font color="#818181">(2000 words; 9 minute read.)</font></div>  <div class="paragraph">So far, I&rsquo;ve laid the foundations for a story of rational polarization.<span>&nbsp;</span>I&rsquo;ve argued that we have reason to <a href="https://www.kevindorst.com/stranger_apologies/rp" target="_blank">explain polarization through rational mechanisms</a>; showed that <a href="https://www.kevindorst.com/stranger_apologies/what-is-rational-polarization" target="_blank">ambiguous evidence is necessary</a>&nbsp;to do so; and <a href="https://www.kevindorst.com/stranger_apologies/how-to-polarize-rational-people" target="_blank">described an experiment</a> illustrating this possibility.<br /><br />Today, I&rsquo;ll conclude the core theoretical argument. I'll give an ambiguous-evidence model of our experiment that both (1) explains the predictable polarization it induces, and (2) shows that such<span>&nbsp;</span>polarization can in principle be <strong>profound </strong>(both sides end up disagreeing massively) and <strong>persistent</strong>&nbsp;(neither side changes its opinion when they discover that they disagree).<br /><br />With this final piece of the theory in place, we&rsquo;ll be able to apply it to the <a href="https://www.kevindorst.com/stranger_apologies/how-we-polarized" target="_blank">empirical mechanisms that drive polarization</a>, and see how the polarizing effects of persuasion, confirmation bias, motivated reasoning, and so on, can all be rationalized by ambiguous evidence.<br /><br />Recall our <a href="https://www.kevindorst.com/stranger_apologies/how-to-polarize-rational-people" target="_blank">polarization experiment</a>.<span>&nbsp;</span></div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div class="paragraph">I flipped a coin, and then showed you a <strong><em>word-completion task</em></strong>: a series of letters and blanks that may or may not be completable by an English word.<span>&nbsp; </span>For example, FO_E_T <em>is</em> completable (hint: where are there lots of trees?); but _AL_W is not (alas, <a href="http://users.ox.ac.uk/~shug2406/">Bernhard Salow</a> is not yet sufficiently famous).<br />&#8203;<br />One group&mdash;the Headsers&mdash;saw a completable string if the coin landed heads; the other group&mdash;the Tailsers&mdash;saw a completable string if the coin landed tails.
As they did this for more and more tasks, their average confidence that the coins landed heads diverged more and more:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/9-12-amb-hser-tser_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">Question: what drives this polarization&mdash;and why think that it is <em>rational</em>?<br /><br />Our word-completion task provides an instance of what is sometimes called a &ldquo;good-case/bad-case asymmetry&rdquo; in epistemology (<a href="https://philpapers.org/rec/WILKAI/" target="_blank">Williamson 2000</a>, <a href="https://philpapers.org/rec/LASNRR" target="_blank">Lasonen-Aarnio 2015</a>, <a href="https://philpapers.org/rec/SALTEG" target="_blank">Salow 2018</a>).<span>&nbsp;</span>The asymmetry is that you get better (less ambiguous) evidence in the &ldquo;good&rdquo; case than in the &ldquo;bad&rdquo; case&mdash;and, therefore, it&rsquo;s easier to recognize that you&rsquo;re in the good case (when you are) than to recognize that you&rsquo;re in the bad case (when you are).<br /><br />In our experiment, the &ldquo;good case&rdquo; is when the letter-string is completable; the &ldquo;bad case&rdquo; is when it&rsquo;s not.<span>&nbsp;</span>The crucial fact is that it&rsquo;s easier to recognize that a string is completable than to recognize that it&rsquo;s not.&nbsp; It&rsquo;s possible to get unambiguous evidence that the letter-string <em>is</em> completable (all you have to do is find a word). But it&rsquo;s <em>im</em>possible to get unambiguous evidence that it&rsquo;s <em>not</em> completable.<br /><br />In particular, what should you think when you don&rsquo;t find a word?<span>&nbsp; </span>This is some evidence that the string is not completable&mdash;but how much?<span>&nbsp;</span>After all, you can&rsquo;t rule out the possibility that you <em>should</em> find a word, or that you should at least have an inkling that there&rsquo;s one.<span>&nbsp;</span>More generally, you should be unsure what to make of this evidence: if there <em>is </em>a word, you have more evidence that there is; if there&rsquo;s not, you have less; but you can&rsquo;t be sure of which possibility you&rsquo;re in.<br /><br />There are a variety of models we can give of your evidence to capture this idea, all of which satisfy the <a href="https://www.kevindorst.com/stranger_apologies/what-is-rational-polarization" target="_blank">value of evidence</a>&nbsp;and yet lead to predictable polarization (see the <a href="https://www.kevindorst.com/reasonably-polarized-technical-appendix.html" target="_blank">Technical Appendix</a>&nbsp;(&sect;5.1) for some variations).<br /><br />Here's a simple one:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/10-3-graded-wc-task.png?1601622718" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">Either there is a word, or there&rsquo;s not; and either you find one, or you don&rsquo;t&mdash;but you can&rsquo;t find a word that&rsquo;s not there, so there are only 3 types of possibilities (the 
circles in the diagram).<br /><br />The numbers inside the circles indicate how confident you should be, beforehand, that you&rsquo;ll end up in those possibilities: you should be &frac12; confident that there&rsquo;ll be no word (and you won&rsquo;t find one), since that&rsquo;s determined by a coin flip; and if there <em>is</em> a word, there&rsquo;s some chance (say, &frac12;) you&rsquo;ll find one, meaning you should be &frac12;*&frac12; = &frac14; confident you&rsquo;ll end up in each of the <em>Word-and-Find</em>&nbsp;(top right) and <em>Word-and-Don&rsquo;t-Find&nbsp;</em>(bottom right)&nbsp;possibilities.<br /><br />Meanwhile, the labeled arrows <em>from</em> possibilities represent how confident you should be&nbsp;<em>after</em> you see the task, if in fact you&rsquo;re in that possibility.<span>&nbsp; </span><br /><br />If there's a word and you find one, you should be sure of that&mdash;hence the arrow labeled &ldquo;1&rdquo; pointing from the top-right possibility to itself.<span>&nbsp;</span>If there&rsquo;s no word and you don&rsquo;t find one, you should be somewhat confident of that (say &#8532; probability), but you should leave open that there&rsquo;s a word that you didn&rsquo;t find (say, &#8531; probability).<span>&nbsp; </span>But if there <em>is</em> a word and you don&rsquo;t find one, you should be more confident than that&mdash;after all, since there is a word, you&rsquo;ve received more evidence that there is, even if that evidence is hard to recognize.<span>&nbsp;</span>Any higher number will do, but in this model you should be &#8532; confident there&rsquo;s a word if there is one and you don't find one.<br /><br />If you don&rsquo;t find a word, your evidence is <em>ambiguous</em> because you should be unsure how confident you should be that there&rsquo;s a word&mdash;maybe you should be &#8531; confident; maybe instead you should be &#8532; confident.<span>&nbsp;</span>(In a realistic model there would be many more possibilities, but this simple one illustrates the structural point.)<br /><br />There are two important facts about this model: (1) it is predictably polarizing, and yet (2) it satisfies the value of evidence.<br /><br /><strong>Why is the evidence predictably polarizing?</strong><span>&nbsp;</span>You start out &frac12; confident there&rsquo;ll be a word. But your prior <em>estimate</em> for how confident you should end up is <em>higher</em> than &frac12;. After all, there&rsquo;s a &frac12; chance your confidence should go up&mdash;perhaps way up, perhaps only somewhat up. Meanwhile, there&rsquo;s a &frac12; chance it should go down&mdash;but not very far down.<span>&nbsp; </span>Thus, on average, you expect seeing word-completion tasks to provide evidence that there&rsquo;s a word.<br /><br />(Precisely: your prior expectation of the posterior rational confidence is &frac12;*&#8531;<span>&nbsp;</span>+ &frac14;*&#8532; + &frac14;*1 = 7/12, which is greater than &frac12;.)<br /><br />Notice that if you had <em>un</em>ambiguous evidence&mdash;so that the rational confidence was the same at all possibilities wherein you don&rsquo;t find a word&mdash;this model would <em>not</em> be predictably polarizing. (Then your prior expectation would be &frac34;*&#8531; + &frac14;*1 = &frac12;.)</div>
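<div class="paragraph">(To double-check the arithmetic, here is a tiny Python sketch that recomputes both expectations straight from the numbers in the diagram; encoding the model as (prior, posterior) pairs is my own choice.)</div>  <pre><code>from fractions import Fraction as F

# (prior probability of the possibility, rational posterior confidence in "word")
ambiguous = [(F(1, 2), F(1, 3)),   # no word, and none found
             (F(1, 4), F(2, 3)),   # word, but not found
             (F(1, 4), F(1, 1))]   # word, and found

# Unambiguous variant: the same posterior (1/3) at every no-find possibility.
unambiguous = [(F(3, 4), F(1, 3)), (F(1, 4), F(1, 1))]

def expected_posterior(model):
    return sum(prior * posterior for prior, posterior in model)

print(expected_posterior(ambiguous))    # 7/12 -- predictably above the 1/2 prior
print(expected_posterior(unambiguous))  # 1/2  -- no predictable polarization
</code></pre>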
<div class="paragraph">So what drives the predictable polarization is the ambiguity&mdash;in particular, the fact that when you don&rsquo;t find a word, you should be more confident the string is completable if there is a word than if there&rsquo;s not.<br /><br />This, incidentally, is empirically confirmed: <a href="https://www.kevindorst.com/stranger_apologies/how-to-polarize-rational-people" target="_blank">in my experiment</a>, amongst tasks in which people didn&rsquo;t find a word (had confidence &lt;100%), the average confidence when there was <em>no</em> word was 44.6%, while the average confidence when there <em>was</em> a word was 52.3%&mdash;a statistically significant difference. (Stats: t(309) = 2.77, one-sided p = 0.003, and d=0.32.)<br /><br /><strong>Why is the evidence valuable, despite being polarizing?</strong> Note that the rational posterior degrees of confidence are <em>uniformly more accurate</em>&nbsp;than the prior rational confidence: no matter what possibility is actual, you become uniformly more confident of truths and less confident of falsehoods.<span>&nbsp;</span><br /><br />This can be seen by noting that, in each possibility, the probabilities always become more centered on the actual possibility. For example, suppose there&rsquo;s no word. Then initially you should be &frac12; confident of this, and afterwards you should be &#8532; confident of it.<span>&nbsp;</span>Conversely, suppose there <em>is</em> a word. Then initially you should be &frac12; confident of this, but afterwards you should be either &#8532; confident of it (bottom right), or certain of it (top right). And so on.<br /><br />Because of this, the model satisfies the value of evidence: no matter what decision about the word-completion task you might face, you should prefer to get the evidence before making your decision. (Proof in the <a href="https://www.kevindorst.com/reasonably-polarized-technical-appendix.html" target="_blank">Technical Appendix</a>, &sect;5.1.)</div>  <div class="paragraph" style="text-align:right;"><font color="#818181">(700 words left)</font></div>  <h2 class="wsite-content-title"><u><strong>Profound, Persistent Polarization</strong></u></h2>  <div class="paragraph">How, in principle, could this type of evidence lead to <em>profound</em> and <em>persistent</em> polarization?<br /><br />First note what happens when we divide people into Headsers and Tailsers: we give them symmetric, mirror-image types of evidence.<span>&nbsp;</span>Headsers see completable strings when the coin lands heads; Tailsers see them when it lands tails. Thus Headsers&nbsp;tend to get less ambiguous evidence when the coin lands heads, while Tailsers tend to get less ambiguous evidence when the coin lands tails:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/10-3-graded-hser-tser_orig.png" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">As a result, Headsers should become (on average) more confident in heads, while Tailsers should become (on average) more confident in tails.<br /><br />&#8203;(Precisely: although both start out &frac12; confident of heads, on average Headsers should be 1/6 more confident of heads than Tailsers should be.)<br /><br />Now consider what happens if we present each group with a large number of independent word-completion tasks.<span>&nbsp; </span>(For simplicity, imagine they all know that they&rsquo;re 50% likely to find a word if there is one, so they don&rsquo;t learn anything new about their abilities as they proceed.)<br /><br />Each time they&rsquo;re presented with a word-completion task, they face a question: &ldquo;On this toss, will I find a word, and will the coin land heads or tails?&rdquo;<span>&nbsp; </span>Since the coin tosses are each fair and independent, the answers to all of these questions are independent: knowing the answers to some of them has no bearing on the others. Moreover, we&rsquo;ve seen that with respect to each one of these questions, the evidence is valuable. <br /><br /><strong>In fact, more is true.</strong><span>&nbsp; Let $Q$ be the question "How will each of the coins land?" By iterating this process in the right way, we can make it such that at each stage $i$&nbsp;of the process, you should expect that the evidence you'll receive about coin $i+1$ is valuable with respect to $Q$.&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">(This, mind you, is the most subtle philosophical and technical step&mdash;see the&nbsp;</span><a href="https://www.kevindorst.com/reasonably-polarized-technical-appendix.html" target="_blank">Technical Appendix</a><span style="color:rgb(42, 42, 42)">, &sect;5.2, for more discussion.)</span><br /><br /><span>Thus at each time, if what you care about is getting to the truth about how any of the coins landed, you should gather the evidence.<br />&#8203;</span><br />Suppose Headsers and Tailsers both do this.<span>&nbsp; </span>Then it will predictably lead to profound and persistent disagreement.<br /><br />Why?<span>&nbsp; </span>By the weak <a href="https://en.wikipedia.org/wiki/Law_of_large_numbers" target="_blank">law of large numbers</a>, everyone can predict with confidence that Headsers should wind up very confident that around 7/12 (&asymp;58%) of the coins landed heads, while Tailsers should wind up very confident that around 5/12 (&asymp;42%) of the coins did.<br /><br />Now consider the claim:</div>  <blockquote>&nbsp;&nbsp; &nbsp;<strong><em>Mostly-Heads</em>:</strong> more than 50% of the coins landed heads.</blockquote>  <div class="paragraph">Everyone can predict, at the outset, that Headsers will become very confident (in fact, with enough tosses, arbitrarily confident) that <em>Mostly-Heads</em> is true, and Tailsers will become very confident it&rsquo;s false.<br /><br />&#8203;&#8203;Thus we have&nbsp;<strong>profound </strong>polarization.<br /><br />Moreover, even after undergoing this polarization, Headsers will still be very confident that Tailsers will be very confident that <em>Mostly-Heads</em> is false; meanwhile, Tailsers will be very confident that Headsers will be very confident that <em>Mostly-Heads</em> is true. As a result, neither group will be surprised&mdash;and thus neither group will be <em>moved</em>&mdash;when they discover their disagreement.<br /><br />Thus we have&nbsp;<strong>persistent</strong> polarization.<br /><br />In short: the ambiguity-asymmetries induced by the sort of evidence presented in word-completion tasks can be used to lead rational people to be predictably, profoundly, and persistently polarized. (See the <a href="https://www.kevindorst.com/reasonably-polarized-technical-appendix.html" target="_blank">Technical Appendix</a>, &sect;5.2, for the formal argument.)</div>
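<div class="paragraph">(Here is a minimal simulation sketch of that process. It is my own toy version, reusing the &#8531;/&#8532;/1 posteriors from the model above and a 50% chance of finding a word when there is one; it tracks each group&rsquo;s average per-task confidence in heads, which settles near 7/12 for Headsers and 5/12 for Tailsers, so both groups can see the disagreement over <em>Mostly-Heads</em> coming.)</div>  <pre><code>import random

def confidence_in_heads(headser):
    """One task: flip the coin, search for a word, and return the rational
    confidence that the coin landed heads (posteriors 1, 2/3, 1/3 as above)."""
    heads = random.choice([True, False])
    completable = heads if headser else not heads
    found = completable and random.choice([True, False])  # 50% chance of finding
    if found:
        conf_completable = 1.0
    elif completable:
        conf_completable = 2 / 3   # ambiguous: there was a word, but you missed it
    else:
        conf_completable = 1 / 3   # ambiguous: there was no word to find
    # For a Headser, "completable" just is "heads"; for a Tailser it's "tails".
    return conf_completable if headser else 1 - conf_completable

n = 10_000
print(sum(confidence_in_heads(True) for _ in range(n)) / n)   # about 7/12 = 0.583...
print(sum(confidence_in_heads(False) for _ in range(n)) / n)  # about 5/12 = 0.416...
</code></pre>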
<div class="paragraph">This completes the theoretical argument of this series: <strong>the type of polarization we see in politics&mdash;polarization that is <a href="https://www.kevindorst.com/stranger_apologies/rp" target="_blank">predictable, profound, and persistent</a>&mdash;<em>could</em> be rational.</strong><br /><br /><strong>The rest of the series will make the case that it <em>is</em> rational.</strong>&nbsp;In particular, I&rsquo;ll argue that this ambiguity-asymmetry mechanism plausibly helps explain the empirical mechanisms that drive polarization: persuasion, confirmation bias, motivated reasoning, etc.<br /><br />It&rsquo;s not hard to see, in outline, how the story will go.<br /><br />For &ldquo;heads&rdquo; and &ldquo;tails&rdquo; substitute bits of evidence for and against a politically contentious claim&mdash;say, that racism is systemic.
Recall how Becca and I <a href="https://www.kevindorst.com/stranger_apologies/rp" target="_blank">went our separate ways in 2010</a>&mdash;I, to a liberal university; she, to a conservative college.<br /><br />I, in effect, became a Headser: I was exposed to information in a way that made it easier to recognize evidence in favor of systemic racism. She, in effect, became a Tailser: she was exposed to information in a way that made it easier to recognize evidence <em>against</em> systemic racism.<br /><br />If that were what happened, then <em>both of us could've predicted</em> that we would end up profoundly polarized&mdash;as we did. And <em>neither of us should be moved now</em> when we come back and discover our massive disagreements&mdash;as we&rsquo;re not.<br /><br />And yet: although we each should think that the other is wrong, we should not think that they are less rational, or smart, or balanced than we ourselves are.<br /><br />That is the schematic story of how our polarized politics could have resulted from rational causes.<br /><br />In the remainder of this series, I&rsquo;ll argue that it has.<br /><br /><br />What next?<br /><strong style="color:rgb(42, 42, 42)">If you liked this post,</strong><span style="color:rgb(42, 42, 42)">&nbsp;consider&nbsp;</span><a href="https://mailchi.mp/279517050568/stranger_apologies_signup">signing up</a><span style="color:rgb(42, 42, 42)">&nbsp;for the newsletter,&nbsp;</span><a href="https://twitter.com/kevin_dorst">following me on Twitter</a><span style="color:rgb(42, 42, 42)">, or&nbsp;</span><a href="https://twitter.com/kevin_dorst/status/1302251551587790848">spreading the word</a><span style="color:rgb(42, 42, 42)">.<br /><strong>For the formal details</strong> underlying the argument, see the <a href="https://www.kevindorst.com/reasonably-polarized-technical-appendix.html" target="_blank">Technical Appendix</a>&nbsp;(&sect;5).<br /><strong><a href="https://www.kevindorst.com/stranger_apologies/confirmation-bias-as-avoiding-ambiguity" target="_blank">Next post</a>:</strong>&nbsp;How confirmation bias results from rationally avoiding ambiguity.</span></div>]]></content:encoded></item><item><title><![CDATA[What is "Rational" Polarization?]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/what-is-rational-polarization]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/what-is-rational-polarization#comments]]></comments><pubDate>Fri, 25 Sep 2020 09:16:39 GMT</pubDate><category><![CDATA[Polarization]]></category><category><![CDATA[Reasonably Polarized series]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/what-is-rational-polarization</guid><description><![CDATA[(2300 words, 12 minute read.)So far, I've (1) argued that we need a rational&nbsp;explanation of polarization, (2) described an experiment showing how in principle we could give one, and (3) suggested that this explanation can be applied to the psychological&nbsp;mechanisms that drive polarization.Over the next two weeks, I'll put these normative claims on a firm theoretical foundation. Today&nbsp;I'll explain why ambiguous evidence is both necessary and sufficient for predictable polarization t [...] 
]]></description><content:encoded><![CDATA[<div class="paragraph"><font color="#a1a1a1">(2300 words, 12 minute read.)</font><br /><br />So far, I've (1) argued that <a href="https://www.kevindorst.com/stranger_apologies/rp" target="_blank">we need a <em>rational</em>&nbsp;explanation of polarization</a>, (2) described an <a href="https://www.kevindorst.com/stranger_apologies/how-to-polarize-rational-people" target="_blank">experiment showing how in principle we could give one</a>, and (3) suggested that this explanation can be applied to the psychological&nbsp;<a href="https://www.kevindorst.com/stranger_apologies/how-we-polarized" target="_blank">mechanisms that drive polarization</a>.<br /><br />Over the next two weeks, I'll put these normative claims on a firm theoretical foundation. <strong>Today</strong>&nbsp;I'll explain why ambiguous evidence is both necessary and sufficient for predictable polarization to be rational. <strong>Next week</strong>&nbsp;I'll use this theory to explain our experimental results and show how predictable, profound, persistent polarization can emerge from rational processes.<br /><br />With those theoretical tools in place, we'll be in a position to use them to explain the psychological mechanisms that in fact drive polarization.<br /><br />&#8203;So: what do I mean by "rational" polarization; and why is "ambiguous" evidence the key?<br /><span style="color:rgb(42, 42, 42)"><br />It&rsquo;s standard to distinguish&nbsp;</span><em style="color:rgb(42, 42, 42)">practical</em><span style="color:rgb(42, 42, 42)">&nbsp;from&nbsp;</span><em style="color:rgb(42, 42, 42)">epistemic</em><span style="color:rgb(42, 42, 42)">&nbsp;rationality.&nbsp;Practical rationality is doing the best that you can to fulfill your goals, given the options available to you.&nbsp; Epistemic rationality is doing the best that you can to believe the truth, given the evidence available to you.</span><br /><br /><span style="color:rgb(42, 42, 42)">It&rsquo;s&nbsp;</span><em style="color:rgb(42, 42, 42)">practically</em><span style="color:rgb(42, 42, 42)">&nbsp;rational to believe that climate change is a hoax if you know that doing otherwise will lead you to be ostracized by your friends and family.&nbsp;It&rsquo;s not&nbsp;</span><em style="color:rgb(42, 42, 42)">epistemically</em><span style="color:rgb(42, 42, 42)">&nbsp;rational to do so unless your evidence&mdash;including the opinions of those you trust&mdash;makes it likely that climate change is a hoax.</span><br /><br /><span style="color:rgb(42, 42, 42)">My claim is about epistemic rationality, not practical rationality. Given how important our political beliefs are to our social identities, it&rsquo;s not surprising that it&rsquo;s&nbsp;</span><em style="color:rgb(42, 42, 42)">in our interest</em><span style="color:rgb(42, 42, 42)">&nbsp;to have liberal beliefs if our friends are liberal, and to have conservative beliefs if our friends are conservative. 
Thus is should be uncontroversial that the mechanisms that drive polarization can be&nbsp;</span><em style="color:rgb(42, 42, 42)">practically</em><span style="color:rgb(42, 42, 42)">&nbsp;rational&mdash;as people like&nbsp;</span><a href="https://en.wikipedia.org/wiki/Why_We%27re_Polarized" target="_blank">Ezra Klein</a><span style="color:rgb(42, 42, 42)">&nbsp;and&nbsp;</span><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2319992" target="_blank">Dan Kahan</a><span style="color:rgb(42, 42, 42)">&nbsp;claim.</span><br /><br /><strong style="color:rgb(42, 42, 42)">The more surprising claim</strong><span style="color:rgb(42, 42, 42)">&nbsp;I want to defend is that ambiguities in political evidence make it so that liberals and conservatives&nbsp;</span><em style="color:rgb(42, 42, 42)">who are doing the best they can to believe the truth</em><span style="color:rgb(42, 42, 42)">&nbsp;will tend to become more confident in their opposing beliefs.</span><br /><br /><span style="color:rgb(42, 42, 42)">To defend&nbsp;</span><em style="color:rgb(42, 42, 42)">this</em><span style="color:rgb(42, 42, 42)">&nbsp;claim, we need concrete theory of epistemic rationality.</span><br /></div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div class="paragraph"></div>  <h2 class="wsite-content-title"><u style="color:rgb(42, 42, 42)"><strong>The Standard Theory</strong></u></h2>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">The standard theory is what we can call&nbsp;</span><strong style="color:rgb(42, 42, 42)">unambiguous Bayesianism</strong><span style="color:rgb(42, 42, 42)">. It says that the rational degrees of confidence at a time can always be represented with a single probability distribution, and that new evidence is always&nbsp;</span><em style="color:rgb(42, 42, 42)">unambiguous</em><span style="color:rgb(42, 42, 42)">, in the sense that you can always know exactly how confident to be in light of that evidence.&nbsp;</span><br /><span style="color:rgb(42, 42, 42)">&#8203;</span><br /><span style="color:rgb(42, 42, 42)">Simple example: suppose there&rsquo;s a fair lottery with 10 tickets.&nbsp;You hold 3 of them, Beth holds 2, and Charlie holds 5.&nbsp; Given that information, how confident should you be in the various outcomes? 
That's easy: you should be 30% confident you&rsquo;ll win, 20% confident Beth will, and 50% confident Charlie will.</span><br /><br /><span style="color:rgb(42, 42, 42)">Now suppose I give you some unambiguous evidence: I tell you whether or not Charlie won.&nbsp;Again, you&rsquo;ll know exactly what to do with this information: if I tell you he won, you know you should be 100% confident he won; if I tell you he lost, that means there are 5 tickets remaining, 3 of which belong to you&mdash;so you should be 3/5 = 60% confident that you won and 40% confident that Beth did.</span><br /><br /><span style="color:rgb(42, 42, 42)">In effect, unambiguous Bayesianism assimilates every case of information-gain to a situation like our lottery, wherein you always know what probabilities to have both before and after the evidence comes in.</span><br /><br /><span style="color:rgb(42, 42, 42)">This has a surprising consequence:</span></div>  <blockquote><font size="4"><strong style="color:rgb(42, 42, 42)">Fact 1.</strong><span style="color:rgb(42, 42, 42)">&nbsp;Unambiguous Bayesianism implies that, no matter what evidence you might get, predictable polarization is always irrational.</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span></font></blockquote>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">(&#8203;</span><span style="color:rgb(42, 42, 42)">The&nbsp;<a href="https://www.kevindorst.com/reasonably-polarized-technical-appendix.html" target="_blank">Technical Appendix</a>&nbsp;contains all formal statements and proofs.)<br /><br />In particular, consider me back in 2010, thinking about the political attitudes I&rsquo;d have in 2020.&nbsp;</span><span style="color:rgb(42, 42, 42)">Unambiguous Bayesianism implies that&nbsp;</span><em style="color:rgb(42, 42, 42)">no matter what evidence I might get</em><span style="color:rgb(42, 42, 42)">&mdash;no matter that I was going to a liberal university, for instance&mdash;I shouldn&rsquo;t have expected it to be rational for me to become any more liberal than I was then.</span><br /><br />Moreover, Fact 1 also implies that if me and Becca shared opinions in 2010, then we couldn't&nbsp;have expected rational forces to lead me to become more liberal than her.&nbsp;<br /><br /><span style="color:rgb(42, 42, 42)">Why is Fact 1 true&mdash;and what does it mean?</span><br /><br /><strong style="color:rgb(42, 42, 42)">Why it&rsquo;s true:</strong><span style="color:rgb(42, 42, 42)">&nbsp;Return to the simple lottery case. Suppose you are only allowed to ask questions which you know I&rsquo;ll give a clear answer to.</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">You&rsquo;re currently 30% confident that you won. 
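</span><br /><br /><span style="color:rgb(42, 42, 42)">(Here is a minimal sketch in Python, my own illustration rather than anything from the post, of the Bayesian bookkeeping for this lottery. It verifies the 60/40 update above and previews the expectation computations that follow.)</span><br /><pre>
# Fair lottery with 10 tickets: you hold 3, Beth 2, Charlie 5.
prior = {"you": 0.3, "beth": 0.2, "charlie": 0.5}

def condition(dist, answer):
    """Update on the unambiguous news that the winner is in `answer`."""
    z = sum(p for w, p in dist.items() if w in answer)
    return {w: (p / z if w in answer else 0.0) for w, p in dist.items()}

# Unambiguous evidence that Charlie lost: 5 tickets remain, 3 of them yours.
print(condition(prior, {"you", "beth"}))   # you: 0.6, beth: 0.4, charlie: 0.0

# For any question with unambiguous answers, your *expected* posterior
# confidence that you won equals your prior 30% (the property behind Fact 1).
for question in [{"you"}, {"you", "beth"}]:   # "Did I win?", "Did Beth or I win?"
    p_yes = sum(prior[w] for w in question)
    exp_post = (p_yes * condition(prior, question)["you"]
                + (1 - p_yes) * condition(prior, set(prior) - question)["you"])
    print(sorted(question), round(exp_post, 2))   # 0.3 both times
</pre><span style="color:rgb(42, 42, 42)">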
Is there anything you can ask me that you expect will make you more confident of this?</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">No.</span><br /><br /><span style="color:rgb(42, 42, 42)">You could ask me, &ldquo;Did I win?&rdquo;&mdash;but although there&rsquo;s a 30% chance I&rsquo;ll say &lsquo;Yes&rsquo;, and your confidence will jump to 100%, there&rsquo;s a 70% chance I&rsquo;ll say &lsquo;No&rsquo; and it&rsquo;ll drop to 0%.</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">Notice that (0.3)(1) + (0.7)(0) = 30%.</span><br /><br /><span style="color:rgb(42, 42, 42)">You could instead ask me something that&rsquo;s more likely to give you confirming evidence, such as &ldquo;Did Beth or I win?&rdquo;</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">In that case it&rsquo;s 50% likely that I&rsquo;ll say &lsquo;Yes&rsquo;&mdash;but if I do your confidence will only jump to 60% (since there&rsquo;ll still be a 40% chance that Beth won); and if I say &lsquo;No&rsquo;, your confidence will drop to 0%. And again, (0.5)(0.6) + (0.5)(0) = 30%.</span><br /><br /><span style="color:rgb(42, 42, 42)">This is no coincidence. Fact 1 implies that if you can only ask questions with unambiguous answers, there&rsquo;s&nbsp;</span><em style="color:rgb(42, 42, 42)">no</em><span style="color:rgb(42, 42, 42)">&nbsp;question you can ask that you can expect to make you more confident that you won. And recall: unambiguous Bayesianism assimilates&nbsp;<em>every</em>&nbsp;scenario to one like this.</span><br /><br /><strong style="color:rgb(42, 42, 42)">What it means:</strong><span style="color:rgb(42, 42, 42)">&nbsp;Fact 1 implies that if unambiguous Bayesianism is the right theory of epistemic rationality, then the polarization we observe in politics must be irrational.</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><br /><br /><span style="color:rgb(42, 42, 42)">After all, a <a href="https://www.kevindorst.com/stranger_apologies/rp" target="_blank">core feature of this polarization</a> is that&nbsp;</span><em style="color:rgb(42, 42, 42)">it is possible to see it coming</em><span style="color:rgb(42, 42, 42)">.
When my friend Becca and I went our separate ways in 2010, I expected that her opinions would get more conservative, and mine would get more liberal.</span><span style="color:rgb(42, 42, 42)">&nbsp;</span><span style="color:rgb(42, 42, 42)">Unambiguous Bayesianism implies, therefore, that I must chalk such predictable polarization up to irrationality.</span><br /><br /><span style="color:rgb(42, 42, 42)">But, <a href="https://www.kevindorst.com/stranger_apologies/rp" target="_blank">as I&rsquo;ve argued</a>, there&rsquo;s strong reason to think I&nbsp;</span><em style="color:rgb(42, 42, 42)">can&rsquo;t</em><span style="color:rgb(42, 42, 42)">&nbsp;chalk it up to irrationality&mdash;for if I&rsquo;m to hold onto my political beliefs now, I can&rsquo;t think they were formed irrationally.</span><br /><br /><span style="color:rgb(42, 42, 42)">This&mdash;now stated more precisely&mdash;is the&nbsp;</span><strong style="color:rgb(42, 42, 42)">puzzle of predictable polarization</strong><span style="color:rgb(42, 42, 42)">&nbsp;with which I began this series.</span></div>  <div class="paragraph" style="text-align:right;"><font color="#a1a1a1">(1300 words left)</font></div>  <h2 class="wsite-content-title"><strong><u>Ambiguous Evidence</u></strong></h2>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">The solution is ambiguous evidence.</span><br /><br /><span style="color:rgb(42, 42, 42)">Evidence is ambiguous when it doesn&rsquo;t wear its verdicts on its sleeve&mdash;when even rational people should be unsure how to react to it. Precisely: your evidence is&nbsp;</span><strong style="color:rgb(42, 42, 42)">ambiguous</strong><span style="color:rgb(42, 42, 42)">&nbsp;if, in light of it, you should be unsure how confident to be in some claim.</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><br /><br /><span style="color:rgb(42, 42, 42)">(More precisely: letting P be the rational probabilities to have given your evidence, there is some claim q such that P(P(q)=t)&lt;1, for all t. See the <a href="https://www.kevindorst.com/reasonably-polarized-technical-appendix.html" target="_blank">Technical Appendix</a>.)</span><br /><br /><span style="color:rgb(42, 42, 42)">Here is the key result driving this project:</span></div>  <blockquote><font size="4"><strong style="color:rgb(42, 42, 42)">Fact 2.</strong><span style="color:rgb(42, 42, 42)">&nbsp;Whenever evidence is ambiguous, there is a claim on which it can be predictably polarizing.</span></font></blockquote>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">In other words, someone who receives ambiguous evidence <em>can</em> expect it to be rational to increase their confidence in some claim. Therefore, if two people will receive ambiguous evidence, it's possible for them to expect that their beliefs will diverge in a particular direction.</span><br /><br /><span style="color:rgb(42, 42, 42)"><a href="https://www.kevindorst.com/stranger_apologies/how-to-polarize-rational-people" target="_blank">As we saw in our experiment</a>&mdash;and as I&rsquo;ll explain in more depth next week&mdash;this means that ambiguous evidence can lead to predictable, rational shifts in your beliefs.</span><br /><br /><span style="color:rgb(42, 42, 42)">Without going into the formal argument, this is something that I think we all grasp, intuitively.</span><span style="color:rgb(42, 42, 42)">&nbsp;</span><span style="color:rgb(42, 42, 42)">Consider an activity like asking a friend for encouragement.
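</span><br /><br /><span style="color:rgb(42, 42, 42)">(First, though, here is a two-world toy model, my own construction following the Appendix&rsquo;s definition rather than an example from the post, that witnesses Fact 2 numerically:)</span><br /><pre>
# Two worlds: at "clear" the evidence is unambiguous and the rational credence
# in q is 1; at "murky" it is ambiguous: the rational credence is 50-50 and,
# crucially, cannot tell which world it is in.
worlds = ["clear", "murky"]
prior = {"clear": 0.5, "murky": 0.5}
post = {"clear": {"clear": 1.0, "murky": 0.0},
        "murky": {"clear": 0.5, "murky": 0.5}}
q = "clear"   # the claim the evidence polarizes on

# Ambiguity at "murky": for every t, the credence that the rational credence
# in q equals t stays below 1 (it is 0.5 for t = 1.0, 0.5 for t = 0.5, else 0).
for t in [1.0, 0.5]:
    p = sum(post["murky"][w] for w in worlds if post[w][q] == t)
    print("P(P(q) = %.1f) = %.1f" % (t, p))

# Predictable polarization: the prior-expected posterior in q exceeds the prior.
expected = sum(prior[w] * post[w][q] for w in worlds)
print(prior[q], "->", expected)   # 0.5 -> 0.75
</pre><span style="color:rgb(42, 42, 42)">With that on the table, back to the encouragement case.</span><br /><br /><span style="color:rgb(42, 42, 42)">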
For example, suppose that instead of a lottery, you were competing with Charlie and Beth for a job. I don&rsquo;t know who will get the offer, but you&rsquo;re nervous and come to me seeking reassurance. What will I do?</span><br /><br /><span style="color:rgb(42, 42, 42)">I&rsquo;ll provide you reasons to think you&nbsp;</span><em style="color:rgb(42, 42, 42)">will</em><span style="color:rgb(42, 42, 42)">&nbsp;get it&mdash;help you focus on how your interview went well, how qualified you are, etc.</span><br /><br /><span style="color:rgb(42, 42, 42)">Of course, when you go to me seeking reassurance you&nbsp;</span><em style="color:rgb(42, 42, 42)">know</em><span style="color:rgb(42, 42, 42)">&nbsp;that I&rsquo;m going to encourage you in this way. So the mere fact that I&rsquo;m giving you such reasons isn&rsquo;t, in itself, evidence that you got the job.</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">Nevertheless, we go to our friends for encouragement in this way because we&nbsp;</span><em style="color:rgb(42, 42, 42)">do</em><span style="color:rgb(42, 42, 42)">&nbsp;tend to feel more confident afterwards. Why is that?</span><br /><br /><span style="color:rgb(42, 42, 42)">If I&rsquo;m a good encourager, then I&rsquo;ll do my best to make the evidence in favor of you getting the position clear and unambiguous, while making the evidence against it unclear and ambiguous. I&rsquo;ll say, &ldquo;They were really excited about you in the interview, right?&rdquo;&mdash;highlighting unambiguous evidence that you&rsquo;ll get the job.</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">And when you worry, &ldquo;But one of the interviewers looked unhappy throughout it&rdquo;, I&rsquo;ll say, &ldquo;Bill? I hear he&rsquo;s always grumpy, so it&rsquo;s probably got nothing to do with you&rdquo;&mdash;thus making evidence that you didn&rsquo;t get the job more ambiguous and so weaker.</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">On the whole, this back-and-forth can be expected to make you more confident that you&rsquo;ll get the job.</span><br /><br /><span style="color:rgb(42, 42, 42)">This informal account is sketchy, but I hope you can see the rough outlines of how this story will go.</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">We&rsquo;ll return to filling in the details in due course.</span><br /><br /><span style="color:rgb(42, 42, 42)">But before we do that, there&rsquo;s a more fundamental question we need to ask.</span><br /><br /><span style="color:rgb(42, 42, 42)">I&rsquo;ve introduced the notion of ambiguous evidence and proved a result connecting it to polarization. But how do we know that the models of ambiguous evidence which allow for predictable polarization are good models of (epistemic)&nbsp;</span><em style="color:rgb(42, 42, 42)">rationality</em><span style="color:rgb(42, 42, 42)">?
Unambiguous Bayesianism has a distinguished pedigree as a model of rational belief; how do we know that allowing ambiguous evidence isn&rsquo;t just a way of distorting it into an&nbsp;</span><em style="color:rgb(42, 42, 42)">ir</em><span style="color:rgb(42, 42, 42)">rational model?</span></div>  <div class="paragraph" style="text-align:right;"><font color="#a1a1a1">(700 words left)</font></div>  <h2 class="wsite-content-title"><u><strong>The Value of Rationality</strong></u></h2>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">This can be given a precise answer in terms of the&nbsp;</span><em style="color:rgb(42, 42, 42)"><a href="http://users.ox.ac.uk/~shug2406/ValueOfEvidence.pdf" target="_blank">value of evidence</a></em><span style="color:rgb(42, 42, 42)">.</span><br /><br /><strong><span style="color:rgb(42, 42, 42)">What distinguishes rational from&nbsp;</span><em style="color:rgb(42, 42, 42)">ir</em></strong><span style="color:rgb(42, 42, 42)"><strong>rational transitions in belief?</strong> It&rsquo;s rational to update your beliefs about the lottery by&nbsp;</span><em style="color:rgb(42, 42, 42)">asking me a question</em><span style="color:rgb(42, 42, 42)">&nbsp;about who won.</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">It&rsquo;s irrational to update your beliefs by&nbsp;</span><em style="color:rgb(42, 42, 42)">hypnotizing yourself</em><span style="color:rgb(42, 42, 42)">&nbsp;to believe you won. Why the difference?</span><br /><br /><span style="color:rgb(42, 42, 42)">Answer: asking me a question is&nbsp;</span><em style="color:rgb(42, 42, 42)">valuable</em>, in the sense that<span style="color:rgb(42, 42, 42)">&nbsp;you can expect it to make your beliefs more accurate, and therefore to improve the quality of your decisions. Conversely, hypnotizing yourself is&nbsp;</span><em style="color:rgb(42, 42, 42)">not</em><span style="color:rgb(42, 42, 42)">&nbsp;valuable in this sense: if right now you&rsquo;re 30% confident you won, you don&rsquo;t expect that hypnotizing yourself to become 100% confident will make your opinions more accurate&mdash;rather, it&rsquo;ll just make you certain of something that&rsquo;s likely false!</span><br /><br /><span style="color:rgb(42, 42, 42)">This idea can be made formally precise using tools from decision theory. Say that a transition in beliefs is&nbsp;</span><strong style="color:rgb(42, 42, 42)">valuable</strong><span style="color:rgb(42, 42, 42)">&nbsp;if, no matter what decision you face, you prefer to make the transition before making your decision, rather than simply making your decision now.</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">(See the <a href="https://www.kevindorst.com/reasonably-polarized-technical-appendix.html" target="_blank">Technical Appendix</a> for details.)</span><br /><br /><span style="color:rgb(42, 42, 42)">To illustrate, focus on our simple lottery case.
Suppose you&rsquo;re offered the following:</span></div>  <blockquote><strong style="color:rgb(42, 42, 42)">Bet:</strong><span style="color:rgb(42, 42, 42)">&nbsp;If Charlie wins the lottery, you gain 5 dollars; if not, you lose 1 dollar.</span></blockquote>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">Since Charlie is 50% likely to win, this is a bet in your favor.</span><br /><br /><span style="color:rgb(42, 42, 42)">Would you rather (1) decide whether to take the Bet now, or (2) first ask me a question about who won, and&nbsp;</span><em style="color:rgb(42, 42, 42)">then</em><span style="color:rgb(42, 42, 42)">&nbsp;decide whether to take the Bet?</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><br /><br /><span style="color:rgb(42, 42, 42)">Obviously the latter. If you must decide now, you&rsquo;ll take the Bet&mdash;with some chance of gaining 5 dollars, and some chance of losing 1 dollar. But what if instead you first ask, &ldquo;Did Charlie win?&rdquo;, before making your decision? Then if I say &lsquo;Yes&rsquo;, you&rsquo;ll take the Bet and walk away with 5 dollars; and if I say &lsquo;No&rsquo;, you leave the Bet and avoid losing 1 dollar.<br /><br /><strong>In short:</strong> <em>asking a question allows you to keep the benefit and reduce the risk.</em>&nbsp;That's why asking questions is epistemically rational.</span><br /><br /><span style="color:rgb(42, 42, 42)">Contrast questions with hypnosis. Would you rather (1) decide whether to take the Bet now, or (2) first hypnotize yourself to believe that you won, and&nbsp;</span><em style="color:rgb(42, 42, 42)">then</em><span style="color:rgb(42, 42, 42)">&nbsp;decide whether to take the Bet?</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><br /><br /><span style="color:rgb(42, 42, 42)">Obviously you&rsquo;d rather&nbsp;</span><em style="color:rgb(42, 42, 42)">not</em><span style="color:rgb(42, 42, 42)">&nbsp;hypnotize yourself. 
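</span><br /><br /><span style="color:rgb(42, 42, 42)">(Here is a minimal Python sketch, my own illustration, putting numbers on the three options: decide now, ask first, or hypnotize first.)</span><br /><pre>
# The Bet: gain $5 if Charlie wins, lose $1 if not. Charlie's chance is 50%.
p_charlie = 0.5

def bet_ev(p):
    """Expected payoff of taking the Bet at credence p that Charlie wins."""
    return p * 5 + (1 - p) * (-1)

def value(p):
    """You take the Bet iff it looks favorable at credence p."""
    return max(bet_ev(p), 0.0)

decide_now = value(p_charlie)                # take it: expected payoff 2.0
ask_first = (p_charlie * value(1.0)          # "Yes, Charlie won": take, +$5
             + (1 - p_charlie) * value(0.0)) # "No": decline, avoid the -$1
# Hypnotized to believe you won (so Charlie lost), you decline, even though
# the Bet's true expected payoff is still 2.0: a good opportunity forgone.
hypnotized = bet_ev(p_charlie) if bet_ev(0.0) > 0 else 0.0

print(decide_now, ask_first, hypnotized)     # 2.0 2.5 0.0
</pre><span style="color:rgb(42, 42, 42)">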
After all, if you don&rsquo;t hypnotize yourself, you&rsquo;ll take the Bet&mdash;and it&rsquo;s a bet in your favor.</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">If you&nbsp;</span><em style="color:rgb(42, 42, 42)">do</em><span style="color:rgb(42, 42, 42)">&nbsp;hypnotize yourself, the Bet will still be in your favor (it&rsquo;s still 50% likely that Charlie will win); but, since you&rsquo;ve hypnotized yourself to think that&nbsp;</span><em style="color:rgb(42, 42, 42)">you&nbsp;</em><span style="color:rgb(42, 42, 42)">won (so Charlie lost), you won&rsquo;t take the bet&mdash;losing out on a good opportunity.</span><br /><br /><span style="color:rgb(42, 42, 42)">All of this can be generalized and formalized into a theory of epistemic rationality:</span></div>  <blockquote><font size="4"><strong style="color:rgb(42, 42, 42)">Rationality as Value:&nbsp;</strong><span style="color:rgb(42, 42, 42)">epistemically rational belief-transitions are those that are&nbsp;</span><em style="color:rgb(42, 42, 42)">valuable,</em><span style="color:rgb(42, 42, 42)">&nbsp;in the sense that you should always expect them to lead to better decisions.</span></font></blockquote>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">There are two core facts this theory gives us.</span><br /><br /><span style="color:rgb(42, 42, 42)">First:&nbsp;</span><em style="color:rgb(42, 42, 42)">if</em><span style="color:rgb(42, 42, 42)">&nbsp;</span><em style="color:rgb(42, 42, 42)">we assume</em><span style="color:rgb(42, 42, 42)">&nbsp;that evidence is unambiguous, then Rationality as Value implies unambiguous Bayesianism:</span></div>  <blockquote><font size="4"><strong style="color:rgb(42, 42, 42)">Fact 3.</strong><span style="color:rgb(42, 42, 42)">&nbsp;Rationality as Value implies that, <em>when evidence is unambiguous</em>, unambiguous Bayesianism is the right theory of epistemic rationality.</span></font></blockquote>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">Thus our theory of epistemic rationality subsumes the standard theory as a special case. In particular, it implies that when evidence is unambiguous, predictable polarization is irrational.</span><br /><br /><span style="color:rgb(42, 42, 42)">Second: once we allow ambiguous evidence, predictable polarization&nbsp;</span><em style="color:rgb(42, 42, 42)">can</em><span style="color:rgb(42, 42, 42)">&nbsp;be rational:</span></div>  <blockquote><font size="4"><strong style="color:rgb(42, 42, 42)">Fact 4.</strong><span style="color:rgb(42, 42, 42)">&nbsp;There are belief-transitions that are valuable but contain ambiguous evidence&mdash;and which, therefore, are predictably polarizing.</span></font><br /></blockquote>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">Fact 4 is the foundation for the theory of rational polarization that I&rsquo;m putting forward. 
It provides a theoretical &ldquo;possibility proof&rdquo;, to complement <a href="https://www.kevindorst.com/stranger_apologies/how-to-polarize-rational-people" target="_blank">our empirical one</a>: it shows that, when evidence is ambiguous, you can rationally expect it to lead you to the truth,&nbsp;</span><em style="color:rgb(42, 42, 42)">despite</em><span style="color:rgb(42, 42, 42)">&nbsp;expecting it to polarize you.</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">This is how we are going to solve the puzzle of predictable polarization.</span><br /><br /><span style="color:rgb(42, 42, 42)">In particular, it turns out that we can string together a series of independent questions and pieces of (ambiguous) evidence with the following features:</span><ul style="color:rgb(0, 0, 0)"><li>Relative to each question, the evidence you&rsquo;ll receive is valuable and yet (slightly) predictably polarizing;</li><li>Yet relative to the collection of questions as a whole, the evidence is predictably and&nbsp;<em>profoundly</em>&nbsp;polarizing.</li></ul><br /><span style="color:rgb(42, 42, 42)">This, I&rsquo;ll argue, is how predictable, persistent, and profound polarization can be rational&mdash;how, back in 2010, Becca and I could predict that we'd come to disagree radically without predicting that either of us would be systematically irrational.&nbsp;&nbsp;</span><br /><br /><span style="color:rgb(42, 42, 42)">What next?<br /><strong>The formal details</strong>&nbsp;and proofs of Facts 1 &ndash; 4 can be found in the <a href="https://www.kevindorst.com/reasonably-polarized-technical-appendix.html" target="_blank">Technical Appendix</a>.</span><br /><strong style="color:rgb(42, 42, 42)">&#8203;If you liked this post,</strong><span style="color:rgb(42, 42, 42)">&nbsp;consider&nbsp;</span><a href="https://mailchi.mp/279517050568/stranger_apologies_signup">signing up</a><span style="color:rgb(42, 42, 42)">&nbsp;for the newsletter,&nbsp;</span><a href="https://twitter.com/kevin_dorst" target="_blank">following me on Twitter</a><span style="color:rgb(42, 42, 42)">, or&nbsp;</span><a href="https://twitter.com/kevin_dorst/status/1302251551587790848">spreading the word</a>.<br /><strong style="color:rgb(42, 42, 42)"><a href="https://www.kevindorst.com/stranger_apologies/rational-polarization-can-be-profound-persistent-and-predictable" target="_blank">Next Post:</a></strong><span style="color:rgb(42, 42, 42)">&nbsp;an argument that the predictable polarization observed in our&nbsp;</span><a href="https://www.kevindorst.com/stranger_apologies/how-to-polarize-rational-people" target="_blank">word-completion experiment</a><span style="color:rgb(42, 42, 42)">&nbsp;was rational, and an explanation of how predictable, profound, persistent polarization can arise rationally from ambiguous evidence.</span><br /><span style="color:rgb(42, 42, 42)">&#8203;&#8203;&#8203;</span></div>]]></content:encoded></item><item><title><![CDATA[How we Polarized]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/how-we-polarized]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/how-we-polarized#comments]]></comments><pubDate>Sat, 19 Sep 2020 12:58:54 GMT</pubDate><category><![CDATA[Polarization]]></category><category><![CDATA[Reasonably Polarized series]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/how-we-polarized</guid><description><![CDATA[(2200 words; 10 min read)When Becca and I left our home
town in central Missouri 10 years ago, I made my way to a liberal university in the city, while she made her way to a conservative college in the country.As I&rsquo;ve said before, part of what&rsquo;s fascinating about stories like ours is that we could&nbsp;predict&nbsp;that we&rsquo;d become polarized as a result: that I&rsquo;d become more liberal; she, more conservative.But what, exactly, does that mean?&nbsp;&nbsp;In what sense have A [...] ]]></description><content:encoded><![CDATA[<div class="paragraph"><font color="#818181">(2200 words; 10 min read)</font><br /><br /><span style="color:rgb(42, 42, 42)">When Becca and I left our home town in central Missouri 10 years ago, I made my way to a liberal university in the city, while she made her way to a conservative college in the country.</span><br /><br /><a href="https://www.kevindorst.com/stranger_apologies/rp">As I&rsquo;ve said before</a><span style="color:rgb(42, 42, 42)">, part of what&rsquo;s fascinating about stories like ours is that we could&nbsp;</span><em style="color:rgb(42, 42, 42)">predict</em><span style="color:rgb(42, 42, 42)">&nbsp;that we&rsquo;d become polarized as a result: that I&rsquo;d become more liberal; she, more conservative.</span><br /><br /><span style="color:rgb(42, 42, 42)">But what, exactly, does that mean?</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">In what sense have Americans polarized&mdash;and in what sense is the&nbsp;</span><em style="color:rgb(42, 42, 42)">predictability</em><span style="color:rgb(42, 42, 42)">&nbsp;of this polarization new?</span><br /><br /><span style="color:rgb(42, 42, 42)">That&rsquo;s a huge question. Here&rsquo;s the short of it.</span></div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div class="paragraph">Polarization has always been with us. A set of basic psychological and sociological mechanisms explain why human societies have always been characterized by <a href="https://deepblue.lib.umich.edu/handle/2027.42/67489"><strong>local conformity and global diversity</strong></a>: there tends to be <a href="http://snap.stanford.edu/class/cs224w-readings/bikhchandani92fads.pdf">agreement within small social circles</a>, but <a href="https://arxiv.org/pdf/0808.2710.pdf">disagreement between them</a>. As a result, when people go off on different life trajectories, it&rsquo;s always been normal for their attitudes to drift apart.<br /><br />What&rsquo;s <em>changed</em> is that a series of factors have come together to <em>align</em> these various mechanisms&mdash;and kick them into overdrive. As a result, now when people like me and Becca go off on different life trajectories, their opinions diverge in <em>predictable</em> and <em>consistent</em> directions, and do so faster and farther than before.<br /><br />In sum: normally, the polarization process is a random walk; recently, it&rsquo;s been transformed into a feedback loop.<br /><br />Of course, there are many moving parts in a full story of modern polarization.<span>&nbsp; </span>My goal is not to challenge the standard empirical accounts of it; instead, it&rsquo;s to challenge their normative interpretations&mdash;to argue that our polarized politics arises from reasonable people who care about the truth but face ambiguous evidence. 
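<br /><br />(As a toy illustration of the random-walk-versus-feedback-loop contrast above, here is a small simulation, my own sketch with made-up numbers rather than part of the empirical story:)<br /><pre>
# Attitude drift: a pure random walk vs. a walk in which each step's expected
# pull grows with your current lean (a crude stand-in for aligned mechanisms).
import random
random.seed(0)

def final_attitude(feedback, steps=50):
    x = 0.0                                   # + is liberal, - is conservative
    for _ in range(steps):
        pull = 0.05 * x if feedback else 0.0  # feedback: your lean feeds itself
        x += random.gauss(pull, 1.0)
    return x

for fb in (False, True):
    gap = sum(abs(final_attitude(fb)) for _ in range(2000)) / 2000
    print("feedback" if fb else "random walk", round(gap, 1))
# With these made-up numbers: the random walk drifts modestly (about 6) and in
# no particular direction; the feedback loop compounds early leans (about 28).
</pre><br />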
Given that, it&rsquo;ll suffice to have a simple empirical story on the table; subtle or contested details will be reserved for the <a href="https://www.kevindorst.com/reasonably-polarized-technical-appendix.html" target="_blank">Technical Appendix</a>.<br /><br />As a representative example, focus on me and Becca. There are three questions we want to answer: In what sense have we &ldquo;polarized&rdquo;? What mechanisms lead to such polarization? And what has <em>changed</em>, making our current polarization different from past polarizations?<br /><br /><br /><u><span><strong>In what sense have we &ldquo;polarized&rdquo;?</strong></span></u><br />There are three distinct &ldquo;polarizations&rdquo; that Becca and I&mdash;and the United States in general&mdash;have gone through in recent decades.<br /><br />The first is <a href="https://www.hoover.org/sites/default/files/research/docs/fiorina_3_finalfile.pdf" target="_blank"><strong>ideological sorting</strong></a>: my views have become more consistently liberal and aligned with those of the Democratic Party; hers have become more consistently conservative and aligned with those of the Republican Party.<br /><br />For example: in 2010 I was pro-choice and Becca was pro-life, but we were both un-opinionated about gun rights. (In high school, I had the experience&mdash;which at the time I thought was cool&mdash;of firing a friend&rsquo;s AK-47 at a shooting range. Non-US readers: yes, you can buy those legally.) Yet in the decade since, my views have become more consistently left-leaning&mdash;I am now both pro-choice <em>and</em> anti-gun. Meanwhile, Becca&rsquo;s views have become more consistently conservative&mdash;she&rsquo;s now both pro-life and pro-gun.<br /><br />The second is <a href="https://www.annualreviews.org/doi/abs/10.1146/annurev-polisci-051117-073034"><strong>affective polarization</strong></a><strong>:</strong> our views of the opposing party have become increasingly negative.<br /><br />For example: in 2010, most of my friends were conservative, and as a result I had quite a bit of respect for the Republican Party.<span>&nbsp; </span>Today, I have to wrack my brains to think of people I know who might&rsquo;ve voted for Trump; and&mdash;I must admit&mdash;I&rsquo;ve come to dislike the Republicans more, and understand where they&rsquo;re coming from less.<span>&nbsp; </span>A similar story, no doubt, governs Becca&rsquo;s opinions toward Democrats.<br /><br />The third is <a href="https://en.wikipedia.org/wiki/Group_polarization#Attitude_polarization"><strong>attitude polarization</strong></a>: our disagreements over political questions have become much sharper.<br /><br />For example: if in 2010, you&rsquo;d asked us both whether it&rsquo;d be good for the country for a Republican to be elected president in 2020, Becca would&rsquo;ve been mildly inclined to agree and I&rsquo;d have been mildly inclined to disagree.<span>&nbsp; </span>What if you ask us <em>today</em>? 
Then we&rsquo;ll both admit that this election feels like a matter of life or death for our country.<span>&nbsp; </span>Becca thinks, &ldquo;If Biden wins, the police will be abolished and we&rsquo;ll be cast into socialism!&rdquo;<span>&nbsp; </span>I think, &ldquo;If Trump wins, the norms that uphold our democracy will be under direct assault!&rdquo;&nbsp;<br /><br />(If you&rsquo;re skeptical that Americans&rsquo; political opinions have undergone attitude polarization&mdash;<a href="https://www.hoover.org/sites/default/files/research/docs/fiorina_3_finalfile.pdf" target="_blank">as some are</a>&mdash;see the <a href="https://www.kevindorst.com/reasonably-polarized-technical-appendix.html" target="_blank">technical appendix</a>.)<br /><br />Upshot: Becca and I &ldquo;polarized&rdquo; in the senses that (1) our attitudes became more consistently opposed, (2) our feelings toward the other side became more negative, and (3) our disagreements became increasingly sharp.<br /><br />This process is familiar. The United States has become increasingly ideologically <em>sorted</em> by politics&mdash;for example, the proportion of people with consistently liberal or<span>&nbsp; </span>consistently conservative positions <a href="https://www.pewresearch.org/politics/2014/06/12/political-polarization-in-the-american-public/">more than doubled between 1994 and 2014</a>. Negative feelings toward the other side have skyrocketed&mdash;for example, those with a &ldquo;very unfavorable&rdquo; opinion of the opposing party <a href="https://www.pewresearch.org/politics/2016/06/22/1-feelings-about-partisans-and-the-parties/">nearly tripled between 1994 and 2016.</a> And disagreements on concrete political questions have become increasingly sharp&ndash;&ndash;for example, the <a href="https://www.pewresearch.org/fact-tank/2016/01/12/presidential-job-approval-ratings-from-ike-to-obama/">Republican/Democrat presidential approval ratings</a> averaged 75% / 34% in the time of Nixon, 81% / 23% in the time of (W.) Bush, and currently sit at <a href="https://news.gallup.com/poll/203198/presidential-approval-ratings-donald-trump.aspx">92% / 4% in the time of Trump</a>.<br /><br />So Americans have always been polarized, but that polarization has kicked into overdrive in recent decades.<span>&nbsp; </span>Given that, we need to know two things: <em>Why</em>, in general, do societies polarize? And what has changed to make this more severe in recent decades?<br /><br /><br /><u><span><strong>Why do societies polarize?</strong></span></u><br />Psychologists and sociologists have long known of a set of mechanisms that drive people to have stronger (and often more conflicting) attitudes over time&mdash;especially when they are put in different social and informational environments.<br /><br />First mechanism: most obviously, and most simply, <em>people are </em><a href="https://psycnet.apa.org/record/1998-07091-008"><strong><em>persuaded by arguments</em></strong></a>.
That means that if two people enter different environments in which they&rsquo;ll tend to encounter arguments for different positions, their opinions will predictably diverge.<br /><br />For example, since I was headed to a liberal university, I could expect to hear arguments in favor of progressive taxation and the existence of oppressive sexism; and since Becca was headed to a conservative college, she could expect to hear arguments in favor of the value of capitalism and the importance of being a good woman.<span>&nbsp; </span>As a result of these opposing arguments, it&rsquo;s in some sense obvious that we could predict that we&rsquo;d come to disagree. (But that apparent obviousness is also a bit misleading&mdash;as we&rsquo;ll see, the <em>rationality</em> of this predictable disagreement hinges on the subtleties of ambiguous evidence.)<br /><br />Second mechanism: when groups of like-minded individuals share and discuss their opinions, they tend to become both <a href="https://global.oup.com/academic/product/going-to-extremes-9780195378016?cc=gb&amp;lang=en&amp;"><em>more homogenous</em> and <em>more extreme</em> in those opinions</a>.<span>&nbsp; </span>This is known as the <a href="https://psycnet.apa.org/record/1986-24477-001"><strong>group polarization effect</strong></a>, and is one of the most robust findings in social psychology.<br /><br />For example, when I started talking about gun rights in groups of mostly-liberal university students (the majority of whom had never held a gun), we all grew more confident that interpretations of the second amendment have been bastardized, leading to the rise in mass shootings.<span>&nbsp; </span>Meanwhile, when Becca did the same in groups of mostly-conservative friends (many of whom <em>owned</em> guns), she grew more confident that the right to bear arms was central to American identity, and that since we&rsquo;ve always had guns, the rise in mass shootings has other causes.<br /><br />These first two mechanisms&mdash;persuasion and group polarization&mdash;are what gets polarization started: simply by putting me and Becca in different (liberal vs. conservative) social environments, they started to pull our opinions apart.<br /><br />Now jump to Spring 2011, when our opinions had already been pulled slightly further apart by our new environments. Once this happened, a new set of mechanisms kicked in: <em>our opposing beliefs started to ratchet up on their own</em>.<br /><br />Much of this phenomenon goes under the rather disunited label of &ldquo;<a href="http://wrap.warwick.ac.uk/95233/1/WRAP_Theses_Whittlestone_2017.pdf"><strong>confirmation bias</strong></a>&rdquo;: people&rsquo;s tendency to gather and interpret evidence in a way that confirms their prior or favored beliefs. This tendency can be helpfully divided into three different mechanisms.<br /><br />Third mechanism: <a href="https://www.sciencedirect.com/science/article/pii/S0065260108602129"><strong>selective exposure</strong></a>&mdash;when given a choice, people tend to prefer to see new information that they expect to confirm their prior beliefs, rather than information they expect to disconfirm them.<br /><br />For example, in Spring 2011 I began to consistently check the New York Times and the Washington Post to get my news.
Becca, meanwhile, became more consistent in checking Fox News and the National Review.<br /><br />Fourth mechanism: <a href="https://psycnet.apa.org/record/1981-05421-001"><strong>biased assimilation</strong> of evidence</a>&mdash;when confronted with conflicting or messy evidence, <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1540-5907.2006.00214.x">people tend to interpret it in a way that favors their prior beliefs</a>.<br /><br />For example, in 2012 when <a href="https://www.irishtimes.com/business/economy/employment/obama-era-fiscal-policy-defined-by-republican-stonewalling-1.528089">Republicans stonewalled Obama&rsquo;s proposals to use government funds to boost the economy</a>, I took this to show that Republicans tend to put party over country&mdash;but Becca took it to show that Democrats tend to resort to government over-reach.<br /><br />Final mechanism: <a href="https://pubmed.ncbi.nlm.nih.gov/2270237/"><strong>motivated reasoning</strong></a>, a.k.a. <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3046603"><strong>identity-protective cognition</strong></a>&mdash;people tend to gather and make use of evidence in a way that confirms the things that they want to believe, especially when the belief is tied to their identity.<br /><br />For example, when Tara Reade accused Biden of sexual assault, I must admit that I was inclined to be a bit skeptical&mdash;to spend a bit longer looking at articles that questioned her credibility than at those that supported it. Yet when Christine Blasey Ford made an accusation against Brett Kavanaugh, I made no such skeptical effort.<span>&nbsp; </span>Becca, no doubt, reacted in exactly the opposite way.<br /><br />These mechanisms should sound familiar&mdash;we all know that people tend to be persuaded by arguments, that their motivations tend to affect the way they reason, and so on.<br /><br />This is as it should be. Polarization is a familiar fact of life&mdash;it&rsquo;s <em>always</em> been with us. Thus any adequate explanations of it will be built upon other familiar facts of life; an explanation built solely upon new or surprising findings would be missing the bigger picture.<br /><br />Nevertheless, there <em>is</em> something new in our polarized politics&mdash;and there <em>is</em> something surprising lying behind these familiar mechanisms.<br /><br />What&rsquo;s new is that these mechanisms have become collectively aligned and individually kicked into overdrive.<br /><br />And what&rsquo;s surprising is that <em>all of them</em>&mdash;including howlers like biased assimilation and motivated reasoning&mdash;are to be expected from rational people who care about the truth but face systematically ambiguous evidence.<br /><br />Explaining this latter point will take weeks; today, let&rsquo;s close with the former.<br /><br /><br /><u><span><strong>What has <em>changed?</em></strong></span></u><br />What has led to the recent <em>increases</em> in American polarization? The story, in outline, is that a variety of societal changes have led to increased <em>social</em> and <em>informational</em> <em>sorting</em>.<br /><br />Both the <a href="https://en.wikipedia.org/wiki/Southern_strategy">southern realignment</a> and the civil rights movement <a href="https://www.amazon.co.uk/Why-Were-Polarized-Ezra-Klein/dp/147670032X">started the process</a> of making Democrats the party of consistent progressives and the Republicans the party of consistent conservatives.
In turn, this increase in ideological consistency may have combined with the <a href="https://www.pewforum.org/2019/10/17/in-u-s-decline-of-christianity-continues-at-rapid-pace/">fading influence of religion</a> to make <a href="https://academic.oup.com/poq/article/80/S1/351/2223236"><em>political party</em> the new key to many people&rsquo;s identity</a>.<br /><br />Meanwhile, an <a href="http://www.thebigsort.com/home.php">increasing urban-rural divide</a> has made it so that the political views of one&rsquo;s (future) friends have become more predictable than ever. Combined with a <a href="https://en.wikipedia.org/wiki/Bowling_Alone">precipitous fall in civic engagement</a>, this has led to a decrease in cross-party social pollination and <a href="https://www.pewresearch.org/politics/2014/06/12/section-3-political-polarization-and-personal-life/">fewer friendships across party lines</a>.<br /><br />At the same time, an <a href="https://www.journalism.org/2014/10/21/political-polarization-media-habits/">increasingly fragmented and political media landscape</a>, along with the <a href="https://press.princeton.edu/books/hardcover/9780691175515/republic">rise of web personalization</a> has allowed people greater freedom in choosing their sources of information and opinion.<br /><br />In short, decades ago, our social circles were ideologically diverse, our news sources constantly confronted us with differing opinions, and our political beliefs were not terribly consistent or central to our identities. Today, all that has changed. As a result, the effects of persuasion, group polarization, selective exposure, biased assimilation, and motivated reasoning all point in increasingly the same direction&ndash;&ndash;pulling Democrats to the left, and Republicans to the right.<br /><br />In my case: the more time I spent talking with Democrats, the more persuaded I was of their arguments and the more opinionated we all became. The more informational choices I had, the more inclined I was to listen to and trust liberal sources. The more confident I became, the more inclined I was to interpret ambiguous stories about Republican ideas and politicians in negative ways. The more I came to identify with <em>being</em> a Democrat, the more motivated I was to see new evidence as confirming my Democratic beliefs.<span>&nbsp; </span>And the stronger each of these processes became, the more it reinforced the others.<br /><br />Similarly for Becca&ndash;&ndash;and for Americans everywhere.<br /><br /><strong>Our question:</strong> what does this story&mdash;of old polarizing mechanisms, recently kicked into overdrive&mdash;mean for the rationality of politics?<br /><br />It means that if we understand the old mechanisms, we&rsquo;ll understand both why we&rsquo;ve always been polarized, and the route through which polarization has increased. And it means that if we come to see these old mechanisms as <em>rational</em>, then we&rsquo;ll be able to see our polarized politics&mdash;including our political opponents&mdash;as the result of individuals doing the best they can with the information they have.<br /><br />That&rsquo;s where we&rsquo;re heading. We&rsquo;re going to take an in-depth look at the mechanisms of persuasion, group-polarization, selective exposure, biased assimilation, and motivated reasoning.
We&rsquo;ll see how each of these are driven by rational attention to ambiguous evidence&mdash;and that as societal changes have made our political evidence systematically <em>more</em> ambiguous, they have all been kicked into overdrive.<br /><br />But before we can dive into those details, we need to establish some background. What do I mean when I say that these mechanisms are to be expected from &ldquo;rational people who care about the truth&rdquo; and yet face &ldquo;ambiguous&rdquo; evidence?<span>&nbsp; </span>And <em>how could it be</em> that rational people&mdash;like me and Becca&mdash;can be predictably, persistently, and profoundly polarized?<br /><br /><br /><br />What next?<br /><strong>If you have comments or suggestions about this empirical story</strong>, please send them!<br /><strong>&#8203;<a href="https://www.kevindorst.com/stranger_apologies/what-is-rational-polarization" target="_blank">Next post</a></strong><strong><a href="https://www.kevindorst.com/stranger_apologies/what-is-rational-polarization" target="_blank">:</a></strong> a philosophical crash course on what it means to be &ldquo;rational&rdquo;, what exactly &ldquo;ambiguous evidence&rdquo; is, and how it is needed to explain predictable polarization.</div>]]></content:encoded></item><item><title><![CDATA[How to Polarize Rational People]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/how-to-polarize-rational-people]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/how-to-polarize-rational-people#comments]]></comments><pubDate>Sat, 12 Sep 2020 09:49:31 GMT</pubDate><category><![CDATA[All Most Read]]></category><category><![CDATA[Polarization]]></category><category><![CDATA[Reasonably Polarized series]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/how-to-polarize-rational-people</guid><description><![CDATA[(1700 words; 8 minute read)The core claim of this series is that political polarization is caused by individuals responding rationally to ambiguous evidence.To begin, we need a possibility proof: a demonstration of how ambiguous evidence can drive apart those who are trying to get to the truth. That&rsquo;s what I&rsquo;m going to do today.&#8203;I&rsquo;m going to polarize you, my rational readers.      In my hand I hold a fair coin. I&rsquo;m going to toss it twice (&hellip;done).&nbsp; From t [...] ]]></description><content:encoded><![CDATA[<div class="paragraph"><font color="#818181">(1700 words; 8 minute read)</font><br /><br />The <a href="https://www.kevindorst.com/stranger_apologies/rp">core claim of this series</a> is that political polarization is caused by individuals responding rationally to ambiguous evidence.<br /><br />To begin, we need a possibility proof: a demonstration of how ambiguous evidence can drive apart those who are trying to get to the truth. That&rsquo;s what I&rsquo;m going to do today.<br /><br />&#8203;I&rsquo;m going to polarize you, my rational readers.</div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div class="paragraph" style="text-align:justify;">In my hand I hold a fair coin. I&rsquo;m going to toss it twice (&hellip;done).<span>&nbsp; </span>From those two tosses, I picked one at random; call it the <strong>Random Toss</strong>. How confident are you that the Random Toss landed heads?<span>&nbsp; </span>50-50, no doubt&ndash;&ndash;it&rsquo;s a fair coin, after all.<br /><br />&#8203;But I&rsquo;m going to polarize you on this question. 
What I&rsquo;ll do is split you into two groups&mdash;the Headsers and the Tailsers&mdash;and give those groups different evidence. What&rsquo;s interesting about this evidence is that we all can predict that<span>&nbsp; </span>it&rsquo;ll lead Headsers to (on average) end up <em>more</em> than 50% confident that the Random Toss landed heads, while Tailsers will end up (on average) <em>less</em> than 50% confident. That is: everyone (yourselves included) can predict that you&rsquo;ll polarize.<br /><br />The trick? I&rsquo;m going to use ambiguous evidence.<br /><br />First, to divide you. If you were born on an <em>even</em> day of the month, you&rsquo;re a <strong>Headser</strong>; if you were born on an <em>odd</em> day, you&rsquo;re a <strong>Tailser</strong>.<span>&nbsp; </span>Welcome to your team.<br /><br />You&rsquo;re going to get different evidence about how the coin-tosses landed. That evidence will come in the form of <strong>word-completion tasks</strong>. In such a task, you&rsquo;re shown a string of letters and some blanks, and asked whether there&rsquo;s an English word that completes the string. For instance, you might see a string like this:<br /><br />P_A_ET<br /><br />And the answer is: <em>yes</em>, there is a word that completes that string. (Hint: what is Venus?) Or you might see a string like this:<br /><br />CO_R_D<br /><br />And the answer is: <em>no</em>, there is no word that completes that string.<br /><br />That&rsquo;s the <em>type</em> of evidence you&rsquo;ll get. You&rsquo;ll be given two different word-completion tasks&mdash;one for each toss of the coin. However, Headsers and Tailsers will be given <em>different</em> tasks. Which one they&rsquo;ll see will depend on how the coin landed.<br /><br /><strong><u><span>The rule:</span></u></strong><ul style="color:rgb(0, 0, 0)"><li style="color:rgb(0, 0, 0)"><span style="color:rgb(42, 42, 42)">For each toss of the coin, </span><strong style="color:rgb(42, 42, 42)">Headsers</strong><span style="color:rgb(42, 42, 42)"> will see a </span><u><span style="color:rgb(42, 42, 42)">completable</span></u><span style="color:rgb(42, 42, 42)"> string (like &lsquo;P_A_ET&rsquo;) if the coin landed </span><u><span style="color:rgb(42, 42, 42)">heads</span></u><span style="color:rgb(42, 42, 42)">; they&rsquo;ll see an </span><u><span style="color:rgb(42, 42, 42)">uncompletable</span></u><span style="color:rgb(42, 42, 42)"> string (like &lsquo;CO_R_D&rsquo;) if it landed </span><u><span style="color:rgb(42, 42, 42)">tails</span></u><span style="color:rgb(42, 42, 42)">.</span></li><li>Conversely: for each toss, <strong>Tailsers</strong> will see a <u><span>completable</span></u> string if the coin landed <u><span>tails</span></u>; they&rsquo;ll see an <u><span>uncompletable</span></u> string if it landed <u><span>heads</span></u>.</li></ul><br />Here&rsquo;s your job. Click on the appropriate link below to view a widget which will display the tasks for your group.<span>&nbsp;</span>You&rsquo;ll view the first word-completion task, and then enter how confident you are (between 0&ndash;100%) that the string was completable. Enter &ldquo;100&rdquo; for &ldquo;definitely completable&rdquo;, &ldquo;50&rdquo; for &ldquo;I have no idea&rdquo;, &ldquo;0&rdquo; for &ldquo;definitely <em>not</em> completable&rdquo;, and so on.<br /><br />If you&rsquo;re a Headser, the number you enter is your confidence that the coin landed heads on the first toss.
If you&rsquo;re a Tailser, it&rsquo;s your confidence that the coin landed <em>tails</em>&mdash;so the widget will subtract it from 100 to yield your confidence that the coin landed <em>heads.<span>&nbsp;</span></em>(If you&rsquo;re 60% confident that the coin landed tails, that means you&rsquo;re 100&ndash;60 = 40% confident that it landed heads.)<br /><br />You&rsquo;ll do this whole procedure twice&mdash;once for each toss of the coin. Then the widget will tell you what your <em>average confidence in heads</em> was, across the two tosses. This is how confident you should be that the Random Toss landed heads, given your confidence in each individual toss. And this average is the number that will polarize across the two groups.<br /><br />Enough setup; time to do the tasks.<ul><li>If you&rsquo;re a <strong><a href="https://www.kevindorst.com/hser.html" target="_blank">Headser, click here</a>.</strong>&nbsp;</li><li>If you&rsquo;re a <strong><a href="https://www.kevindorst.com/tser.html" target="_blank">Tailser, click here</a></strong>.</li></ul></div>  <div class="paragraph">Welcome back.<span>&nbsp; </span>You have now, I predict, been polarized.<br /><br />This is a statistical process, so your individual experience may differ. But my guess is that if you&rsquo;re a Headser, your average confidence in heads was <em>greater</em> than 50%; and if you&rsquo;re a Tailser, your average confidence in heads was <em>less</em> than 50%.<br /><br />I&rsquo;ve run this study. Participants were divided into Headsers and Tailsers. They each saw four word-completion tasks. Here&rsquo;s how the two groups&rsquo; average confidence in heads (i.e. their confidence that the Random Toss landed heads) evolved as they saw more tasks:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/9-12-amb-hser-tser_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%">The average confidence in heads of each group after seeing 0&ndash;4  independent word-completion tasks. Blue is Headsers; orange is Tailsers. Bars represent 95% confidence intervals for the mean of each group&rsquo;s average confidence at each stage.</div> </div></div>  <div class="paragraph">Both groups started out 50% confident on average, but the more tasks they saw, the more this diverged. By the end, the average Headser was 58% confident that the Random Toss landed heads, while the average Tailser was 36% confident of it.<br /><br />(That difference is statistically significant; the 95%-confidence interval for the mean difference between the groups&rsquo; final average confidence in heads is [16.02, 26.82]; the Cohen&rsquo;s d effect size is 1.58&mdash;usually 0.8 is considered &ldquo;large&rdquo;.
For a full statistical report, including comparison to a control group with unambiguous evidence, see the <a href="https://www.kevindorst.com/reasonably-polarized-technical-appendix.html" target="_blank">Technical Appendix</a>.)<br /><br />Upshot: the more word-completion tasks Headsers and Tailsers see, the more they polarize.<br />&#8203;<br /><strong>The crucial question:</strong> <em>Why?</em></div>  <div class="paragraph" style="text-align:right;"><font color="#818181">(800 words left)</font></div>  <div class="paragraph">Getting a complete answer to this&mdash;and to why such polarization should be considered&nbsp;<em>rational</em>&mdash;will take us a couple more weeks.<span>&nbsp; </span>But the basic idea is simple enough.<br /><br />&#8203;A word-completion task presents you with evidence that is <strong>asymmetrically ambiguous</strong>. It&rsquo;s easier to know what to think if there <em>is</em> a completion than if there&rsquo;s <em>no</em> completion.<span>&nbsp;</span>If there <em>is</em> a completion, all you have to do is find one, and you know what to think.<span>&nbsp; </span>But if there&rsquo;s <em>no</em> completion, then you can&rsquo;t find one; but nor can you be certain there is <em>none</em>&mdash;for you can&rsquo;t rule out the possibility that there&rsquo;s one you&rsquo;ve missed.<br /><br />Staring at &lsquo;_E_RT&rsquo;, you may be struck by a moment of epiphany&mdash;&lsquo;HEART!&rsquo;&mdash;and thereby get unambiguous evidence that the string is completable.<br /><br />But staring at &lsquo;ST_ _RE&rsquo;, no such epiphany is forthcoming; the best you&rsquo;ll get is a sense that it&rsquo;s probably not completable, since you haven&rsquo;t yet found one.<span>&nbsp; </span>Nevertheless, you should remain unsure whether you&rsquo;ve made a mistake: &ldquo;Maybe it <em>does</em> have a completion and I should know it; maybe in a second, I&rsquo;ll think to myself, &lsquo;It <em>is</em> completable&mdash;duh!&rsquo;&rdquo;<span>&nbsp; </span>This self-doubt is the sign of ambiguous evidence, and it prevents you from being too confident that it&rsquo;s not completable.<br /><br />The result? When you&rsquo;re presented with a string that&rsquo;s completable, you often get strong, unambiguous evidence that it&rsquo;s completable; when you&rsquo;re presented with a string that&rsquo;s <em>not</em> completable, you can only get <em>weak, ambiguous</em> evidence that it&rsquo;s not. Thus when the string is completable, you should often be quite confident that it is; when it&rsquo;s not, you should never be very confident that it&rsquo;s not.<br /><br />I polarized you by exploiting this asymmetry. Headsers saw completable strings when the coin landed heads; Tailsers saw them when it landed tails. That means that Headsers were good at recognizing heads-cases and bad at recognizing tails-cases, while Tailsers were good at recognizing tails-cases and bad at recognizing heads-cases.<br /><br />As a result, if you ask Headsers, they&rsquo;ll say, &ldquo;It&rsquo;s landed heads a lot!&rdquo;; and if you ask Tailsers, they&rsquo;ll say, &ldquo;It&rsquo;s landed tails a lot!&rdquo;. They polarize.<br /><br />Here it&rsquo;s worth emphasizing a subtle but important point&mdash;one that we&rsquo;ll return to. The <strong>ambiguous/unambiguous-evidence distinction</strong> is <em>not</em> the <strong>weak/strong-evidence distinction</strong>.
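<br /><br />(Before drawing that distinction out, here is a minimal simulation, my own sketch with made-up parameters rather than the study&rsquo;s code, of the recognition asymmetry just described:)<br /><pre>
# Finding a word is unambiguous evidence of completability; failing to find one
# is ambiguous. Self-doubt keeps credence in "uncompletable" modest: here the
# no-find credence in "completable" is 0.35, above the 0.23 that transparent
# conditioning on "I searched exhaustively and found nothing" would license.
import random
random.seed(1)

P_EPIPHANY = 0.7    # chance of finding the word when there is one

def conf_completable(completable):
    if completable and P_EPIPHANY > random.random():
        return 1.0  # epiphany ("HEART!"): unambiguous
    return 0.35     # nothing found: weak, ambiguous evidence against

def headser_average(n_tosses=4):
    # Headsers see a completable string iff the toss landed heads, so their
    # confidence in heads just is their confidence in completability.
    return sum(conf_completable(0.5 > random.random())
               for _ in range(n_tosses)) / n_tosses

runs = [headser_average() for _ in range(10000)]
print(round(sum(runs) / len(runs), 2))  # about 0.58 with these parameters
</pre>Now, the distinction itself.<br /><br />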
Ambiguous evidence is evidence <em>that you should be unsure how to react to</em>; unambiguous evidence is evidence that you <em>should</em> be sure how to react to.<br /><br />Ambiguous evidence is necessarily weak, but unambiguous evidence can be weak too. Example: if I tell you I&rsquo;m about to toss a coin that&rsquo;s 60% biased towards heads, that is <em>weak but unambiguous</em> evidence&mdash;<em>weak</em> because you shouldn&rsquo;t be very confident it&rsquo;ll land heads, but <em>unambiguous</em> because you know exactly how confident to be (namely, 60%).<br /><br />The claim is that it is asymmetries in <em>ambiguity</em>&mdash;not asymmetries in <em>strength</em>&mdash;which drive polarization. We can test this by comparing our ambiguous-evidence Headsers and Tailsers to a control group that received evidence that was sometimes strong and sometimes weak, but always (relatively)&nbsp;<em>un</em>ambiguous. (It came in the form of draws from an urn; see the <a href="https://www.kevindorst.com/reasonably-polarized-technical-appendix.html" target="_blank">Technical Appendix</a> for details.)<br /><br />Here is the evolution of the Headsers and Tailsers who got <strong><em>un</em>ambiguous evidence</strong>:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/9-12-unam-hser-tser_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%">The average confidence in heads of each group after seeing 0&ndash;4 independent draws from an urn. Blue is Headsers; orange is Tailsers. Bars represent 95% confidence intervals for the mean of each group&rsquo;s average confidence at each stage.</div> </div></div>  <div class="paragraph"><br />As you can see, there is some divergence (most likely a &ldquo;response bias&rdquo; because of the phrasing of the questions), but significantly less divergence than in our ambiguous-evidence case. (Again, see the <a href="https://www.kevindorst.com/reasonably-polarized-technical-appendix.html" target="_blank">technical appendix</a> for a statistical comparison.)<br /><br />Upshot: ambiguous evidence can be used to drive polarization.<span>&nbsp;</span><br /><br />That concludes my possibility proof. The rest of this project will try to figure out what it means.<span>&nbsp; </span>To do that, we need to look further into both the theoretical foundations and the real-world applications.<br /><br />To preview the foundations: there is a clear sense in which <strong>both Headsers and Tailsers are succeeding at getting to the truth of the matter</strong>&mdash;for each coin flip, they each tend to get more accurate about how it landed. The trouble is that this accuracy is asymmetric, and as a result they end up with very different <em>overall</em> pictures of the outcomes of the series of coin flips.<br /><br />To preview the applications: <strong>these ambiguity-asymmetries can be exploited</strong>. Fox News can spin its coverage so that information that favors Trump is unambiguous, while that which disfavors him is ambiguous.
MSNBC can do the opposite.<span>&nbsp; </span>So when we divide into those-who-watch-Fox and those-who-watch-MSNBC, we are, in effect, dividing ourselves into Headsers and Tailsers.<span>&nbsp; </span>As a result, although we are getting <em>locally</em> more informed whenever we tune into these programs, our <em>global</em> pictures of Trump are getting pulled further and further apart.<br /><br />In fact, the same sort of ambiguity-asymmetry plays out in <em>many</em> different settings&mdash;helping explain why group discussions push people to extremes, why individuals favor news sources that agree with their views, and why partisans interpret shared information in radically different ways.<br /><br />That&rsquo;s where we&rsquo;re headed.<span>&nbsp;</span>To get there, we need to get a few more basics on the table. Empirically: what mechanisms have led to the recent rise in polarization? And theoretically: what would it <em>mean</em> for this polarization&mdash;in our Headser/Tailser game, and in real life&mdash;to be &ldquo;rational&rdquo;?<br /><br />We&rsquo;ll tackle those two questions in the coming weeks.<span>&nbsp;</span>Then we&rsquo;ll put all the pieces together, and examine the role that ambiguity-asymmetries in evidence play in the mechanisms that drive polarization.<br /><br /><br />What next?<br /><strong>If you liked this post,</strong> consider <a href="https://mailchi.mp/279517050568/stranger_apologies_signup">signing up</a> for the newsletter, <a href="https://twitter.com/kevin_dorst">following me on Twitter</a>, or <a href="https://twitter.com/kevin_dorst/status/1302251551587790848">spreading the word</a>.<br /><strong style="color:rgb(42, 42, 42)">For a full statistical analysis of the experiment</strong><span style="color:rgb(42, 42, 42)">, see the </span><a href="https://www.kevindorst.com/reasonably-polarized-technical-appendix.html" target="_blank">technical appendix</a><span style="color:rgb(42, 42, 42)">.&nbsp; No doubt those with more experimental expertise can find ways it could be improved&ndash;&ndash;suggestions most welcome!</span><br /><strong style="color:rgb(42, 42, 42)"><a href="https://www.kevindorst.com/stranger_apologies/how-we-polarized" target="_blank">Next post:</a></strong><span style="color:rgb(42, 42, 42)"> I&rsquo;ll sketch a story of the empirical mechanisms that drive polarization; with that on the table,&nbsp;we'll move on to evaluating them normatively.</span><br /><br /><br />PS. Thanks to <a href="https://fitelson.org/" target="_blank">Branden Fitelson</a> and especially <a href="http://campuspress.yale.edu/joshuaknobe/" target="_blank">Joshua Knobe</a>&nbsp;for much help with the experimental design and analysis.&nbsp;&nbsp;</div>]]></content:encoded></item><item><title><![CDATA[Reasonably Polarized: Why politics is more rational than you think.]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/rp]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/rp#comments]]></comments><pubDate>Sat, 05 Sep 2020 07:52:19 GMT</pubDate><category><![CDATA[All Most Read]]></category><category><![CDATA[Polarization]]></category><category><![CDATA[Reasonably Polarized series]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/rp</guid><description><![CDATA[(1700 words; &#8203;8 minute read.)  
[9/4/21 update: if you'd like to see the rigorous version of this whole blog series, check out the paper on "Rational Polarization" I just posted.]&#8203;  &#8203;&#8203;&#8203;A Standard StoryI haven&rsquo;t seen Becca in a decade. I don&rsquo;t know what she thinks about Trump, or Medicare for All, or defunding the police.But I can guess.&#8203;&#8203;Becca and I grew up in a small Midwestern town. Cows, cornfields, and college football. Both of us were mod [...] ]]></description><content:encoded><![CDATA[<div class="paragraph"><font color="#818181">(1700 words; &#8203;8 minute read.)</font></div>  <div class="paragraph"><font color="#818181">[<strong style="">9/4/21 update:</strong> if you'd like to see the rigorous version of this whole blog series, check out the paper on<a href="https://philpapers.org/rec/DORRP-2" target="_blank" style=""> "Rational Polarization"</a> I just posted.]</font><br />&#8203;</div>  <div class="paragraph"><font color="#626262">&#8203;&#8203;&#8203;</font><u style="color:rgb(42, 42, 42)"><strong>A Standard Story</strong></u><br /><span style="color:rgb(42, 42, 42)">I haven&rsquo;t seen Becca in a decade. I don&rsquo;t know what she thinks about Trump, or Medicare for All, or defunding the police.</span><br /><br /><span style="color:rgb(42, 42, 42)">But I can guess.</span><br /><br /><span style="color:rgb(42, 42, 42)">&#8203;&#8203;</span>Becca and I grew up in a small Midwestern town. Cows, cornfields, and college football. Both of us were moderate in our politics; she a touch more conservative than I&mdash;but it hardly mattered, and we hardly noticed.<br /><br />After graduation, we went our separate ways. I, to a liberal university in a Midwestern city, and then to graduate school on the East Coast. She, to a conservative community college, and then to settle down in rural Missouri.<br /><br />I&ndash;&ndash;<span>of course&ndash;&ndash;</span>became increasingly liberal. I came to believe that gender roles are oppressive, that racism is systemic, and that our national myths let the powerful paper over the past.<br /><br />And Becca?</div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">You and I can both guess how her story differs. She&rsquo;s probably more concerned by shifting gender norms than by the long roots of sexism; more worried by rioters in Portland than by police shootings in Ferguson; and more convinced of America&rsquo;s greatness than of its deep flaws.</span><br /><br /><span style="color:rgb(42, 42, 42)">&#8203;In short: we started with similar opinions, set out on different life trajectories, and, 10 years down the line, we deeply disagree.<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">So far, so familiar. 
The story of me and Becca is one tiny piece of the modern American story: one of </span><a href="https://www.pewresearch.org/wp-content/uploads/sites/4/2014/06/6-12-2014-Political-Polarization-Release.pdf" style="background-color: transparent;">pervasive&mdash;and increasing&mdash;political polarization</a><span style="color: rgb(0, 0, 0); background-color: transparent;">.<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">It&rsquo;s often noted that this polarization is </span><strong style="color: rgb(0, 0, 0); background-color: transparent;"><em>profound</em></strong><span style="color: rgb(0, 0, 0); background-color: transparent;">: partisans now disagree so much that they often </span><a href="https://thenewpress.com/books/strangers-their-own-land" style="background-color: transparent;">struggle to understand each other</a><span style="color: rgb(0, 0, 0); background-color: transparent;">.<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">It&rsquo;s often noted that this polarization is </span><strong style="color: rgb(0, 0, 0); background-color: transparent;"><em>persistent</em></strong><span style="color: rgb(0, 0, 0); background-color: transparent;">: when partisans sit down to talk about their now-opposed beliefs, </span><a href="https://www.pewresearch.org/politics/2016/06/22/3-partisan-environments-views-of-political-conversations-and-disagreements/#many-find-political-discussions-with-opponents-stressful" style="background-color: transparent;">they rarely rethink or revise them.<br /><br /></a><span style="color: rgb(0, 0, 0); background-color: transparent;">But what&rsquo;s rarely emphasized is that this polarization is </span><strong style="color: rgb(0, 0, 0); background-color: transparent;"><em>predictable</em></strong><span style="color: rgb(0, 0, 0); background-color: transparent;">: people setting out on different life trajectories </span><a href="https://www.amazon.co.uk/Going-Extremes-Minds-Unite-Divide/dp/0195378016" style="background-color: transparent;"><em>can see all this coming</em></a><span style="color: rgb(0, 0, 0); background-color: transparent;">. When Becca and I said goodbye in the summer of 2010, we both suspected that we wouldn&rsquo;t be coming back. That when we met again, our disagreements would be larger. That we&rsquo;d understand each other less, trust each other less, like each other less.<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">And we were right.</span><span style="color: rgb(0, 0, 0); background-color: transparent;">&nbsp; </span><span style="color: rgb(0, 0, 0); background-color: transparent;">That&rsquo;s why I haven&rsquo;t seen her in a decade.<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">Told this way, the story of polarization raises questions that are both political and personal. What should I now think of Becca&mdash;and of myself</span><em style="color: rgb(0, 0, 0); background-color: transparent;">?</em><span style="color: rgb(0, 0, 0); background-color: transparent;"> How should I reconcile the strength of my current beliefs with the fact that they were utterly predictable? 
And what should I reach for to explain how I came to disagree so profoundly with my old friends?<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">The standard story:</span> <strong style="color: rgb(0, 0, 0); background-color: transparent;"><em>irrationality</em></strong><span style="color: rgb(0, 0, 0); background-color: transparent;">.<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">The story says, in short, that </span><a href="https://www.vox.com/2014/4/6/5556462/brain-dead-how-politics-makes-us-stupid" style="background-color: transparent;">politics makes us stupid</a><span style="color: rgb(0, 0, 0); background-color: transparent;">. That despite our best intentions, we </span><a href="https://www.huffpost.com/entry/political-polarization-is-a-psychology-problem_b_5a01dd9ee4b07eb5118255e5?guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&amp;guce_referrer_sig=AQAAAFaQ8oEzibBubeaz3Ai62tPyY1GWBVrJ4q2UUhqwcEOXgmeOISzrY06hciO9rWeloWrFZ4OUJQRW3v8Fp_ijiv0k6a33hPX7cMlbFx9jLQ_yDHznDPl72s8p6FmKf2dJPRV9ZCvXWZD5bma-ZisvtSoBlBwCGo8zjV6hfzD4Br92&amp;guccounter=2" style="background-color: transparent;">glom onto the beliefs of our peers</a><span style="color: rgb(0, 0, 0); background-color: transparent;">, interpret information in </span><a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1540-5907.2006.00214.x" style="background-color: transparent;">biased ways</a><span style="color: rgb(0, 0, 0); background-color: transparent;">, defend our beliefs </span><a href="https://www.dan.sperber.fr/wp-content/uploads/2009/10/MercierSperberWhydohumansreason.pdf" style="background-color: transparent;">as if they were cherished possessions</a><span style="color: rgb(0, 0, 0); background-color: transparent;">, and thus </span><a href="https://www.aeaweb.org/articles?id=10.1257/aer.20130921" style="background-color: transparent;">wind up wildly overconfident</a><span style="color: rgb(0, 0, 0); background-color: transparent;">. 
You&rsquo;ve probably heard the buzzwords: </span><a href="https://journals.sagepub.com/doi/abs/10.1037/1089-2680.2.2.175" style="background-color: transparent;">&ldquo;confirmation bias&rdquo;</a><span style="color: rgb(0, 0, 0); background-color: transparent;">, </span><a href="https://psycnet.apa.org/doiLanding?doi=10.1037%2F0022-3514.50.6.1141" style="background-color: transparent;">&ldquo;the group-polarization effect&rdquo;</a><span style="color: rgb(0, 0, 0); background-color: transparent;">, </span><a href="https://psycnet.apa.org/doiLanding?doi=10.1037/0033-2909.108.3.480" style="background-color: transparent;">&ldquo;motivated reasoning&rdquo;</a><span style="color: rgb(0, 0, 0); background-color: transparent;">, </span><a href="https://learnmoore.org/mooredata/HOC.pdf" style="background-color: transparent;">&ldquo;the overconfidence effect&rdquo;</a><span style="color: rgb(0, 0, 0); background-color: transparent;">, and so on.<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">This irrationalist picture of human nature has quite the pedigree&mdash;it has won</span><span style="color: rgb(0, 0, 0); background-color: transparent;">&nbsp; </span><a href="https://www.nobelprize.org/prizes/economic-sciences/2002/kahneman/facts/" style="background-color: transparent;">Nobel Prize</a><a href="https://www.nobelprize.org/prizes/economic-sciences/2017/thaler/facts/" style="background-color: transparent;">s</a><span style="color: rgb(0, 0, 0); background-color: transparent;">, started </span><a href="https://en.wikipedia.org/wiki/Behavioral_economics" style="background-color: transparent;">academic subfields</a><span style="color: rgb(0, 0, 0); background-color: transparent;">, and </span><a href="https://www.amazon.co.uk/Thinking-Fast-Slow-Daniel-Kahneman/dp/0141033576" style="background-color: transparent;">embedded itself firmly in the popular imagination</a><span style="color: rgb(0, 0, 0); background-color: transparent;">.<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">When combined with a new wave of research on the </span><a href="https://www.salon.com/2007/11/07/sunstein/" style="background-color: transparent;">informational traps of the modern internet</a><span style="color: rgb(0, 0, 0); background-color: transparent;">, the standard story offers a simple explanation for why political polarization has exploded: our biases have led us to mis-use our new informational choices. 
Again, you&rsquo;ve probably heard the buzzwords: </span><a href="https://smile.amazon.com/Echo-Chamber-Limbaugh-Conservative-Establishment/dp/0195366824/ref=tmm_hrd_swatch_0?_encoding=UTF8&amp;qid=&amp;sr=" style="background-color: transparent;">&ldquo;echo chambers&rdquo;</a><span style="color: rgb(0, 0, 0); background-color: transparent;">, </span><a href="https://books.google.co.uk/books/about/The_Filter_Bubble.html?id=Qn2ZnjzCE3gC&amp;redir_esc=y" style="background-color: transparent;">&ldquo;filter bubbles&rdquo;</a><span style="color: rgb(0, 0, 0); background-color: transparent;">, </span><a href="https://science.sciencemag.org/content/359/6380/1094" style="background-color: transparent;">&ldquo;fake news&rdquo;</a><span style="color: rgb(0, 0, 0); background-color: transparent;">, </span><a href="https://press.princeton.edu/books/hardcover/9780691175515/republic" style="background-color: transparent;">&ldquo;the Daily Me&rdquo;</a><span style="color: rgb(0, 0, 0); background-color: transparent;">, and so on.<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">The result?</span><span style="color: rgb(0, 0, 0); background-color: transparent;">&nbsp; </span><span style="color: rgb(0, 0, 0); background-color: transparent;">A bunch of pig-headed people who increasingly think that they are right and balanced, while the other side is wrong and biased.<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">It&rsquo;s a striking story. But it doesn&rsquo;t work.<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">It says that polarization is predictable because </span><em style="color: rgb(0, 0, 0); background-color: transparent;">irrationality</em><span style="color: rgb(0, 0, 0); background-color: transparent;"> is predictable: that Becca and I knew that, due to our biases, I&rsquo;d get enthralled by liberal professors and she&rsquo;d get taken in by conservative preachers.<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">But that&rsquo;s wrong. When I looked ahead in 2010, I didn&rsquo;t see systematic biases leading to the changes in my opinions. And looking back today, I don&rsquo;t see them now.<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">If I </span><em style="color: rgb(0, 0, 0); background-color: transparent;">did</em><span style="color: rgb(0, 0, 0); background-color: transparent;"> see them, then I&rsquo;d give up those opinions. For no one thinks to themselves, &ldquo;Gender roles are oppressive, racism is systemic, and national myths are lies&mdash;but the reason I believe all that is that I interpreted evidence in a biased and irrational way.&rdquo; More generally: it&rsquo;s </span><a href="https://onlinelibrary.wiley.com/doi/10.1111/nous.12026" style="background-color: transparent;">incoherent to believe that your own beliefs are irrational</a><span style="color: rgb(0, 0, 0); background-color: transparent;">. Therefore, so long as we hold onto our political beliefs, we </span><em style="color: rgb(0, 0, 0); background-color: transparent;">can&rsquo;t</em><span style="color: rgb(0, 0, 0); background-color: transparent;"> think that they were formed in a systematically irrational way.<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">So I don&rsquo;t see systematic irrationality in my past. Nor do I suspect it in Becca&rsquo;s. 
She was just as sharp and critically-minded as I was; if conservative preachers changed her mind, it was not for a lack of rationality.<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">It turns out that tellers of the irrationalist tale must agree. For despite the many controversies surrounding political (ir)rationality, one piece of common ground is that both sides are </span><a href="https://www.theatlantic.com/magazine/archive/2011/12/i-was-wrong-and-so-are-you/308713/" style="background-color: transparent;"><em>equally</em> susceptible to the factors that lead to polarization</a><span style="color: rgb(0, 0, 0); background-color: transparent;">. As far as the psychological evidence is concerned, the &ldquo;other side&rdquo; is no less rational than you&ndash;&ndash;so if you don&rsquo;t blame </span><em style="color: rgb(0, 0, 0); background-color: transparent;">your</em><span style="color: rgb(0, 0, 0); background-color: transparent;"> beliefs on irrationality (as you can&rsquo;t), then you shouldn&rsquo;t blame theirs on it either.<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">In short: given that we can&rsquo;t believe that our own beliefs are irrational, the irrationalist explanation of polarization falls apart.<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">Suppose you find this argument convincing. Even so, you may find yourself puzzled. After all: what could explain our profound, persistent, and predictable polarization, if not for irrationality? As we&rsquo;ll see, there&rsquo;s a genuine philosophical puzzle here. And when we can&rsquo;t see our way to the solution, it&rsquo;s very natural to fall back on irrationalism.<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">In particular: since we can&rsquo;t view our own beliefs as irrational, it&rsquo;s natural to instead blame polarization on the </span><em style="color: rgb(0, 0, 0); background-color: transparent;">other side&rsquo;s</em><span style="color: rgb(0, 0, 0); background-color: transparent;"> irrationality: &ldquo;I can&rsquo;t understand how </span><em style="color: rgb(0, 0, 0); background-color: transparent;">rational</em><span style="color: rgb(0, 0, 0); background-color: transparent;"> people could see Trump so differently. But </span><em style="color: rgb(0, 0, 0); background-color: transparent;">I&rsquo;m</em><span style="color: rgb(0, 0, 0); background-color: transparent;"> not irrational&mdash;so the irrational ones must be Becca and her conservative friends, right?&rdquo;<br /><br /></span><span style="color: rgb(0, 0, 0); background-color: transparent;">That thought turns our disagreement into something more. Not only do we think the other side is wrong&mdash;we now think they are </span><em style="color: rgb(0, 0, 0); background-color: transparent;">irrational</em><span style="color: rgb(0, 0, 0); background-color: transparent;">. Or </span><em style="color: rgb(0, 0, 0); background-color: transparent;">biased</em><span style="color: rgb(0, 0, 0); background-color: transparent;">. Or </span><em style="color: rgb(0, 0, 0); background-color: transparent;">dumb</em><span style="color: rgb(0, 0, 0); background-color: transparent;">. 
And </span><a href="https://www.pewresearch.org/politics/2016/06/22/1-feelings-about-partisans-and-the-parties/" style="background-color: transparent;"><em>that</em> process of demonization</a><span style="color: rgb(0, 0, 0); background-color: transparent;">&mdash;more than anything else&mdash;is the sad story of American polarization.</span><br /></div>  <div class="paragraph" style="text-align:right;"><span style="color:rgb(0, 0, 0)">&nbsp;&nbsp;</span><span style="color:rgb(127, 127, 127)">(650 words left)</span></div>  <div class="paragraph"><u style="color:rgb(42, 42, 42)"><span style="color:rgb(0, 0, 0)"><strong>A Reasonable Story</strong></span></u><span style="color:rgb(0, 0, 0)">&nbsp; &nbsp; &nbsp;</span><br />What if it need not be so? What if we could think the other side is wrong, and not think they are dumb? What if we could tell a story on which diverging life trajectories can lead&nbsp;<em>rational</em>&nbsp;people&mdash;ones who care about the truth&mdash;to be persistently, profoundly, and predictably polarized?<br /><br />That&rsquo;s what I&rsquo;m going to do. I&rsquo;m going to show how findings from psychology, political science, and philosophy allow us to see polarization as the result of reasonable people doing the best they can with the information they have. To argue that the fault lies not in ourselves, but in the systems we inhabit.&nbsp;&nbsp;And to paint a picture on which our polarized politics consists largely of individually rational actors, collectively acting out a tragedy.<br /><br /><strong>Here is the key.&nbsp;</strong>When evidence is&nbsp;<em>ambiguous&ndash;&ndash;</em>when it is hard to know how to interpret it&mdash;it can lead rational people to predictably polarize.<br /><br />This is a theorem in standard (i.e. Bayesian) models of rational belief. It makes concrete and confirmed empirical predictions. And it offers a unified explanation of our buzzwords: confirmation bias, the group-polarization effect, motivated reasoning, and the overconfidence effect are all to be expected from rational people who care about the truth but face systematically ambiguous evidence.<br /><br />More than that: this story explains why polarization has exploded in recent decades. Changes in our social and informational networks have made it so that, with increasing regularity, the evidence we receive&nbsp;<em>in favor</em>&nbsp;of our political beliefs tends to be&nbsp;<em>un</em>ambiguous and therefore strong&mdash;while that which we receive&nbsp;<em>against</em>&nbsp;them tends to be ambiguous and therefore weak. The rise in this systematic asymmetry is what explains the rise in polarization.<br /><br /><strong>In short:</strong>&nbsp;the standard story is&nbsp;<em>right</em>&nbsp;about which mechanisms lead people to polarize, but&nbsp;<em>wrong</em>&nbsp;about what this means about people. People polarize because they look at information that confirms their beliefs and talk to people who are like-minded. But they do these things&nbsp;<em>not</em>&nbsp;because they are irrational, biased, or dumb. They do them because it is the best way to navigate the landscape of complex, ambiguous evidence that pervades our politics.<br /><br />That&rsquo;s the claim I&rsquo;m going to defend over the coming weeks.<br /><br />Here&rsquo;s how. I&rsquo;ll start with a possibility proof&mdash;a simple demonstration of how ambiguous evidence can lead rational people to predictably polarize.
Our goal will then be to figure out what this demonstration tells us about real-world polarization.<br /><br />To do that, we need to dive into both the empirical and theoretical details. In what sense has the United States become increasingly &ldquo;polarized&rdquo;&mdash;and why? What would it mean for this polarization to be &ldquo;rational&rdquo;&mdash;and how could ambiguous evidence make it so? How does such evidence explain the mechanisms that drive polarization&mdash;and what, therefore, might we do about them? I&rsquo;ll do my best to give answers to each of these questions.<br /><br />This, obviously, is a big project. That means two things.<br /><br />First, it means I&rsquo;m going to present it in two streams. The core will be this blog, which will explain the main empirical and conceptual ideas in an intuitive way. In parallel, I&rsquo;ll post an expanding&nbsp;<strong><a href="https://www.kevindorst.com/reasonably-polarized-technical-appendix.html">technical appendix</a></strong>&nbsp;that develops the details underlying each stage in the argument.<br /><br />Second, it means that this is a work in progress.&nbsp;&nbsp;It&rsquo;ll eventually be a book, but&mdash;though I&rsquo;ve been working on it for years&mdash;getting to a finished product will be a long process. This blog is my way of nudging it along.<br /><br />That means I want your help! The more feedback I get, the better this project will become. I want to know which explanations do (and don&rsquo;t) make sense; what strands of argument are (and are not) compelling; and&mdash;most of all&mdash;what you find of value (or not) in the story I&rsquo;m going to tell.<br /><br />So please: send me your questions, reactions, and suggestions.<br /><br />And together, I hope, we can figure out what happened between me and my old friends&mdash;and, maybe, what happened between you and yours.<br /><br /><br />What next?<br /><strong>If you&rsquo;re interested in this project,</strong>&nbsp;consider <a href="https://mailchi.mp/279517050568/stranger_apologies_signup" target="_blank">signing up</a> for the newsletter, <a href="https://twitter.com/kevin_dorst" target="_blank">following me on Twitter</a>, or <a href="https://twitter.com/kevin_dorst/status/1302251551587790848" target="_blank">spreading the word</a>.<br /><strong><a href="https://www.kevindorst.com/stranger_apologies/how-to-polarize-rational-people" target="_blank">Next post:</a></strong>&nbsp;an experiment that demonstrates how ambiguous evidence can lead people to polarize.<br /><br />&#8203;<br />PS.
Thanks to <a href="http://web.mit.edu/cdg/www/" target="_blank">Cosmo Grant</a>, <a href="http://www.rachelelizabethfraser.com/" target="_blank">Rachel Fraser</a>, and <a href="https://www.gingerschultheis.com/" target="_blank">Ginger Schultheis</a> for helpful feedback on previous drafts of this post&mdash;and to <a href="https://www.liamkofibright.com/" target="_blank">Liam Kofi Bright</a>, <a href="http://cailinoconnor.com/" target="_blank">Cailin O'Connor</a>, <a href="http://www.kevinzollman.com/" target="_blank">Kevin Zollman</a>, and especially <a href="https://philosophy.uchicago.edu/faculty/a-callard" target="_blank">Agnes Callard</a> for much help and advice getting this project off the ground.</div>]]></content:encoded></item><item><title><![CDATA[How (Not) to Test for Algorithmic Bias (Guest Post)]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/how-not-to-test-for-algorithmic-bias-guest-post]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/how-not-to-test-for-algorithmic-bias-guest-post#comments]]></comments><pubDate>Sat, 22 Aug 2020 09:19:04 GMT</pubDate><category><![CDATA[Uncategorized]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/how-not-to-test-for-algorithmic-bias-guest-post</guid><description><![CDATA[This is a guest post by Brian Hedden (University of Sydney).(3000 words; 14 minute read)  Predictive and decision-making algorithms are playing an increasingly prominent role in our lives. They help determine what ads we see on social media, where police are deployed, who will be given a loan or a job, and whether someone will be released on bail or granted parole. Part of this is due to the recent rise of machine learning. But some algorithms are relatively simple and don&rsquo;t involve any AI [...] ]]></description><content:encoded><![CDATA[<div class="paragraph"><font color="#818181">This is a guest post by <strong><a href="http://brian-hedden.com/" target="_blank">Brian Hedden</a></strong> (University of Sydney).<br />(3000 words; 14 minute read)</font></div>  <div class="paragraph"><br />Predictive and decision-making algorithms are playing an increasingly prominent role in our lives. They help determine what ads we see on social media, where police are deployed, who will be given a loan or a job, and whether someone will be released on bail or granted parole. Part of this is due to the recent rise of machine learning. But some algorithms are relatively simple and don&rsquo;t involve any AI or &lsquo;deep learning.&rsquo;<br /><br />As algorithms enter into more and more spheres of our lives, scholars and activists have become increasingly interested in whether they might be biased in problematic ways. The algorithms behind some facial recognition software <a href="http://openbiometrics.org/publications/klare2012demographics.pdf">are less accurate for women and African Americans</a>. Women are <a href="https://www.andrew.cmu.edu/user/danupam/dtd-pets15.pdf">less likely than men</a> to be shown an ad relating to high-paying jobs on Google. 
Google Translate <a href="https://www.theverge.com/2018/12/6/18129203/google-translate-gender-specific-translations-languages">translated neutral non-English pronouns into masculine English pronouns</a> in sentences about stereotypically male professions (e.g., &lsquo;he is a doctor&rsquo;).<br /><br />When Alexandria Ocasio-Cortez noted the possibility of algorithms being biased (e.g., in virtue of encoding biases found in their programmers, or the data on which they are trained), Ryan Saavedra, a writer for the conservative Daily Wire, <a href="https://twitter.com/realsaavedra/status/1087627739861897216?lang=en">mocked her</a> on Twitter, writing &ldquo;Socialist Rep. Alexandria Ocasio-Cortez claims that algorithms, which are driven by math, are racist.&rdquo;<br /><br />I think AOC was clearly right and Saavedra clearly wrong. It&rsquo;s true that algorithms do not have inner feelings of prejudice, but that doesn&rsquo;t mean they cannot be racist or biased in other ways.<br /><br />&#8203;But in any particular case, it&rsquo;s tricky to determine whether a given algorithm is in fact biased or unfair. This is largely due to the lack of agreed-upon criteria of algorithmic fairness.<br /></div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">This lack of consensus can be usefully illustrated by the controversy over the COMPAS algorithm used to predict recidivism. (It&rsquo;s so famous that the Princeton computer scientist Arvind Narayanan&nbsp;</span><a href="https://youtu.be/jIXIuYdnyyk?t=270">jokes</a><span style="color:rgb(42, 42, 42)">&nbsp;that it&rsquo;s mandatory to mention COMPAS in any discussion of algorithmic fairness!)</span><br /><br /><span style="color:rgb(42, 42, 42)">In a major&nbsp;</span><a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing">report</a><span style="color:rgb(42, 42, 42)">&nbsp;for ProPublica, researchers concluded that COMPAS is &lsquo;biased against blacks,&rsquo; to quote the headline of their article. They reached this conclusion on the grounds that COMPAS yielded a higher false positive rate (non-recidivists incorrectly labelled high-risk) for black people than for white people, and a higher false negative rate (recidivists incorrectly labelled low-risk) for white people than for black people.</span><br /><br /><span style="color:rgb(42, 42, 42)">Northpointe, the company behind COMPAS, responded to ProPublica&rsquo;s charge, noting that COMPAS was equally accurate for black and white people, in the sense that their risk scales had equal AUC&rsquo;s (areas under the ROC curve). (Roughly, the AUC, applied to the case at hand, represents the probability that a random recidivist will be ranked as higher risk than a random non-recidivist. I won&rsquo;t get into the technical details of this concept, but see&nbsp;</span><a href="https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc">here</a><span style="color:rgb(42, 42, 42)">&nbsp;for some background.)
And&nbsp;</span><a href="https://heinonline.org/HOL/Page?handle=hein.journals/fedpro80&amp;div=21&amp;g_sent=1&amp;casa_token=cvl3zIsE9HUAAAAA:MtbP8R63ZOeskegyBVyi9T566ffdpB3Dbk7Zl_aw4d0dVNVKEOjahkUPzM0n5adEWBCvh-VoCpk">Flores, Bechtel, and Lowenkamp</a><span style="color:rgb(42, 42, 42)">&nbsp;defended COMPAS on the grounds that, for each possible risk score, the percentage of those assigned that risk score who went on to recidivate was roughly the same for black and for white people.</span><br /><br /><span style="color:rgb(42, 42, 42)">It seems that ProPublica was tying fairness to one set of criteria, while Northpointe and Flores et al. were tying fairness to a different set of criteria. How should we decide which side was right? How should we decide whether COMPAS was really unfair or biased against black people? More generally, how should we decide whether an algorithm is unfair or biased?</span><br /><br /><span style="color:rgb(42, 42, 42)">&#8203;Before jumping into this discussion, it&rsquo;s worth pointing out that the debate over algorithmic fairness also bears on the fairness of human predictions and decisions. We can, after all, think of human prediction and decision-making as based on an underlying algorithm. And some possible criteria for what it takes for an algorithm to be fair, including those we&rsquo;ll focus on below, can be applied to any set of predictions or decisions whatsoever, regardless of the nature of the underlying mechanism that produces them.</span></div>  <div class="paragraph" style="text-align:right;"><font color="#818181">2400 words left</font></div>  <div class="paragraph"><u><strong>Statistical Criteria of Fairness</strong></u><br />Let&rsquo;s focus on algorithms like COMPAS. These algorithms make predictions, not decisions, though of course their predictions might be used to feed into decisions about bail, parole, and the like. The algorithms in question take as input a &lsquo;feature vector&rsquo; (a set of known characteristics) and output either a risk score, or a binary prediction, or both. For simplicity, let&rsquo;s focus on algorithms that output both a real-valued risk score between 0 and 1 and a binary (yes/no) prediction. We can think of the risk score as a probability that the individual will fall into the &lsquo;positive&rsquo; class, and the prediction as akin to a binary belief about whether the individual will be positive or negative.<br /><br />What criteria must a predictive algorithm satisfy in order to qualify as fair and unbiased? Some criteria concern the inner workings of the algorithm. Perhaps a fair algorithm must not use group membership as part of the feature vector upon which its predictions are based. It must be blinded to whether the individual is male or female, black or white, and so on. Perhaps fairness also requires that the algorithm be blinded to any &lsquo;proxies&rsquo; for group membership. For instance, we might regard ZIP code as a proxy for race, given that housing in the US is highly segregated. It is a difficult matter to say in general when some feature counts as a proxy in a problematic sense, but the basic idea is clear enough.<br /><br />Fairness also presumably requires that the algorithm use the same threshold in moving from a risk score to a binary prediction. It would be unfair, for instance, if black people assigned a risk score above 0.7 were predicted to recidivate, while only white people assigned a risk score above 0.8 were predicted to recidivate. 
These criteria are relatively uncontroversial and relatively easy to satisfy (except for the tricky issue of proxies for group membership). But are there any other criteria that an algorithm must satisfy in order to be fair and unbiased?<br /><br />This post will be concerned with a class of purported fairness criteria that require that certain relations between predictions and actuality be the same across the relevant groups. I&rsquo;ll call these &lsquo;<strong>statistical criteria of fairness</strong>.&rsquo; These are the sorts of criteria that are at issue in the debate over COMPAS. They are of interest in part because we can determine whether they are satisfied by some algorithm just by looking at what it predicted and what actually happened. We don&rsquo;t need to look at the inner workings of the algorithm, which may be proprietary or otherwise opaque. (This opacity is itself a problem, and we should seek as much <a href="https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency">transparency</a> as possible going forward.)<br /><br />Here are the main statistical criteria of fairness at issue in the debate over COMPAS. See the Appendix for several more that have been considered and discussed in the literature.<br /><br /><span><strong>(1)</strong> <strong>Calibration Within Groups:</strong> For each possible risk score, the percentage of individuals assigned that risk score who are actually positive is the same for each relevant group and equal to that risk score.</span><br /><span><strong>(2) Equal False Positive Rates:</strong> The percentage of actually negative individuals who are falsely predicted to be positive is the same for each relevant group</span><br /><span><strong>(3) Equal False Negative Rates:</strong> The percentage of actually positive individuals who are falsely predicted to be negative is the same for each relevant group.</span><br /><br />It&rsquo;s pretty easy to see why each seems like an attractive criterion of fairness. If an algorithm violates (1) Calibration Within Groups, then it would seem that a given risk score has different evidentiary value for members of different groups. A given risk score would &lsquo;mean&rsquo; different things for different individuals, depending on which group they are members of. If an algorithm violates (2), yielding a higher false positive rate for one group than for another, it&rsquo;s tempting to conclude that it was being more &lsquo;risky,&rsquo; or was jumping to conclusions more quickly, with respect to one group versus another. The same goes if it violates (3), yielding a higher false negative rate for one group than for another. And this seems unfair. It seems to conflict with the idea that individuals should be treated the same by the algorithm, regardless of their group membership.</div>  <div class="paragraph" style="text-align:right;"><font color="#818181">1700 words left</font></div>  <div class="paragraph"><u><strong>Impossibility Theorems</strong></u><br />It would be nice if some algorithms could satisfy all of these criteria. This wouldn&rsquo;t mean that the algorithm is in fact fair. Even if each of these statistical criteria is necessary for fairness, they are not jointly sufficient &ndash; we saw above that there are additional, non-statistical criteria that must be satisfied as well. 
But still, it would be a promising start if an algorithm could satisfy all of these statistical criteria.<br /><br />But it is impossible for an algorithm to satisfy all of these criteria, except in marginal cases. This is the upshot of a series of impossibility theorems. Two such theorems are particularly important. <a href="https://arxiv.org/abs/1609.05807">Kleinberg et al.</a> prove that no algorithm can jointly satisfy (1) and close relatives of (2) and (3) unless either (i) base rates (i.e. the percentage of individuals who are in fact positive) are equal across the relevant groups, or (ii) prediction is perfect (i.e. the algorithm assigns risk score 1 to all positive individuals and 0 to all negative individuals).<span>&nbsp; </span><a href="https://arxiv.org/abs/1610.07524">Chouldechova</a> proves that no algorithm can jointly satisfy (2) and (3) and a close relative of (1), again unless base rates are equal or prediction is perfect.<br /><br />I won&rsquo;t go through the proofs of these impossibility theorems, but they&rsquo;re not terribly technical. And <a href="https://phenomenalworld.org/analysis/impossible-to-be-fair">here&rsquo;s</a> a great explanation of the theorems and their importance.<br /><br />&#8203;What should we make of these theorems? Pessimistically, we might conclude that fairness dilemmas are all but inevitable; outside of marginal cases, we cannot help but be unfair or biased in some respect.<br /><br />&#8203;More optimistically, we might conclude that some of our criteria are not in fact necessary conditions on algorithmic fairness, and that we need to take a second look and sort the genuine fairness conditions from the specious ones. This is the tack I will take.</div>
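<div class="paragraph">Before we do, here&rsquo;s a quick numerical sketch of what drives the impossibility (my illustration, with made-up numbers). Chouldechova&rsquo;s result turns on a simple identity: once a group&rsquo;s positive predictive value (PPV) and false negative rate (FNR) are fixed, its base rate <em>forces</em> its false positive rate. So if base rates differ, equal PPVs and FNRs leave no room for equal FPRs:</div>  <pre><code># Chouldechova-style bookkeeping (illustrative numbers, not COMPAS data):
# with PPV and FNR fixed, the false positive rate is forced to be
#     FPR = (1 - FNR) * (p / (1 - p)) * (1 - PPV) / PPV
# where p is the group's base rate.
def forced_fpr(base_rate, ppv, fnr):
    return (1 - fnr) * (base_rate / (1 - base_rate)) * (1 - ppv) / ppv

for p in (0.3, 0.5):
    print(f"base rate {p}: forced FPR = {forced_fpr(p, ppv=0.7, fnr=0.2):.3f}")
# base rate 0.3: forced FPR = 0.147
# base rate 0.5: forced FPR = 0.343</code></pre>  <div class="paragraph" style="text-align:right;"><font color="#818181">1400 words left</font></div>  <div class="paragraph"><u><strong>People, Coins, and Rooms</strong></u><br />How could we go about determining which (if any) of the above statistical criteria are genuinely necessary for fairness? One methodology would be to go one-by-one, looking at the motivations behind each criterion, and seeing if those motivations stand up to scrutiny.<br /><br />Another methodology would be to find a perfect, 100% fair algorithm and see if that algorithm can violate any of those criteria. If it can, then this means that the criterion isn&rsquo;t necessary for fairness. (If it can&rsquo;t, this doesn&rsquo;t mean that the criterion&nbsp;<em>is</em>&nbsp;necessary for fairness; perhaps some other 100% fair algorithm can violate it.) But this methodology may seem unpromising. It would be hard, if not impossible, to find any predictive algorithms that everyone agrees are perfectly, 100% fair.<br /><br />At least, that is the case if we consider algorithms that predict important, messy things like recidivism and professional success. But we can do better by considering coin flips; this will enable us to make use of this second methodology.<br /><br />Here is the setup: There are a bunch of people and a bunch of coins of varying biases (i.e. varying chances of landing heads). The people are randomly assigned to one of two rooms, A and B. And the coins are randomly assigned to people. So each person has one coin and is in one of our rooms. Our aim is to predict, for each person, whether her coin will land heads or tails. That is, we are trying to predict, for each person, whether they are a heads person or a tails person.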
Luckily, each coin comes helpfully labelled with its bias.<br /><br />Here is a perfectly, 100% fair algorithm: For each person, take their coin and read its labelled bias. If the coin reads &lsquo;x&rsquo;, assign it a risk score of x. If x&gt;0.5, make the binary prediction that it will land heads. If x&lt;0.5, make the binary prediction that it will land tails. (I assume, for simplicity, that none of the coins are labelled &lsquo;0.5.&rsquo;)<br /><br />It should be clear that this algorithm is perfectly, 100% fair. This, I think, is bedrock. It&rsquo;s certainly an odd setup, and there&rsquo;s some unfortunate randomness, but there&rsquo;s no unfairness anywhere in the setup&ndash;&ndash;and in particular not in our algorithm itself.<br /><br />Let&rsquo;s see how our criteria shake out with respect to this algorithm. A first thing to note is that criteria (1)-(3) were formulated in terms of what outcomes&nbsp;<em>actually</em>&nbsp;result. But with coin flips, anything can happen. No matter how biased a coin is in favour of heads (short of having heads on both sides), it can still land tails, and vice versa. So it&rsquo;s actually quite easy for our algorithm to violate all of (1)-(3), given the right assignment of coins to people and people to rooms.<br /><br />This suggests that we should have formulated our criteria in expectational or probabilistic terms:<br /><br /><strong>(1*) Expectational Calibration Within Groups:</strong> For each possible risk score, the expected percentage of individuals assigned that risk score who are actually positive is the same for each relevant group and equal to that risk score.<br /><strong>(2*) Expectational Equal False Positive Rates:</strong> The expected percentage of actually negative individuals who are falsely predicted to be positive is the same for each relevant group.<br /><strong>(3*) Expectational Equal False Negative Rates: </strong>The expected percentage of actually positive individuals who are falsely predicted to be negative is the same for each relevant group.<br /><br />(There&rsquo;s a tricky question about how to understand the probability function relative to which these expectations are determined. I&rsquo;ll think of it as an evidential probability function which represents what a rational person who knew about the workings of the algorithm would expect.)<br /><br />We can investigate whether our perfectly, 100% fair algorithm can violate any of these starred criteria by considering a case where coin biases match relative frequencies (i.e. exactly 75% of the coins labelled 0.75 land heads, and so on). If our algorithm violates one of the unstarred criteria in this case, then it also violates the starred version of that criterion.<br /><br />It turns out that our perfectly, 100% fair algorithm must satisfy (1*), but it can violate (2*) and (3*), given the right assignment of coins to people and people to rooms. Moreover, it can violate them&nbsp;<em>simultaneously</em>. And surprisingly, <strong>it can violate them simultaneously&nbsp;<em>even when base rates are equal</em>&nbsp;across the two rooms.</strong><br /><br />The following case illustrates this: Room A contains 12 people with coins labelled &lsquo;0.75&rsquo; and 8 people with coins labelled &lsquo;0.125.&rsquo; The former are all assigned risk score 0.75 and predicted to be heads people (positive), and nine of them are in fact heads people.
The latter are all assigned risk score 0.125 and predicted to be tails people (negative), and seven of them are in fact tails people. Room B contains 10 people with coins labelled &lsquo;0.6&rsquo; and 10 people with coins labelled &lsquo;0.4.&rsquo; The former are all assigned risk score 0.6 and predicted to be heads people, and six of them are in fact heads people. The latter are all assigned risk score 0.4 and predicted to be tails people, and six of them are in fact tails people.<br /><br />Note that base rates are equal across the two rooms: exactly ten out of the twenty people in each room are heads people.<br /><br />While our algorithm in this case satisfies (1) Calibration Within Groups, and hence also (1*), it violates (2) and (3), and hence also (2*) and (3*). For Room A, the False Positive Rate is 3/10, while for Room B it is 4/10. And for Room A, the False Negative Rate is 1/10, while for Room B it is 4/10. This fair algorithm also violates a host of other statistical criteria of fairness that have been suggested in the literature &ndash; see the Appendix for details.<br /><br />&#8203;This means that it is possible for a perfectly, 100% fair algorithm to violate (2*) and (3*) when given the right population as input. This suffices to show that neither is a necessary condition on fairness. It also suffices to show that none of the other criteria considered in the Appendix are necessary for fairness, either. Only (1*) Expectational Calibration Within Groups is left standing as a plausible necessary condition on fairness.</div>
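<div class="paragraph">For the numerically inclined, here&rsquo;s a short script (my addition) that double-checks this bookkeeping; the rooms, labelled biases, and outcomes are exactly those of the example above:</div>  <pre><code># Checking the coins-and-rooms numbers. Each entry: (labelled bias, count,
# how many of that batch actually landed heads). The fair algorithm predicts
# heads exactly when the labelled bias exceeds 0.5.
rooms = {
    "A": [(0.75, 12, 9), (0.125, 8, 1)],  # 8 coins at 0.125: 7 tails, 1 heads
    "B": [(0.60, 10, 6), (0.40, 10, 4)],  # 10 coins at 0.4: 6 tails, 4 heads
}

for room, batches in rooms.items():
    fp = fn = positives = negatives = 0
    for bias, n, heads in batches:
        predicted_heads = bias > 0.5      # the algorithm's binary prediction
        tails = n - heads
        positives += heads
        negatives += tails
        if predicted_heads:
            fp += tails                   # tails people predicted heads
        else:
            fn += heads                   # heads people predicted tails
    print(f"Room {room}: base rate {positives}/{positives + negatives}, "
          f"FPR {fp}/{negatives}, FNR {fn}/{positives}")
# Room A: base rate 10/20, FPR 3/10, FNR 1/10
# Room B: base rate 10/20, FPR 4/10, FNR 4/10</code></pre>  <div class="paragraph" style="text-align:right;"><font size="3" color="#818181">300 words left</font></div>  <div class="paragraph"><u><strong>Upshots</strong></u><br />I think (1*) Expectational Calibration Within Groups is plausibly necessary for fairness. I also think fairness might require that the &lsquo;inner workings&rsquo; of the algorithm be a certain way, for instance that the algorithm be blinded to group membership and that it use the same threshold in going from a risk score to a binary prediction. There may be other necessary conditions as well.<br /><br />But it is misguided to focus on any of the other statistical criteria of fairness considered here or in the Appendix. Those criteria are tempting due to the relative ease of checking whether they are satisfied. But we have seen that an algorithm&rsquo;s violating those criteria doesn&rsquo;t mean that the algorithm is in any way unfair.<br /><br />Now, even a perfectly fair predictive algorithm might have troubling results when used to make decisions in a certain way. It might have a disparate impact on the relevant groups. But the right way to respond to this disparate impact will often be not to modify the predictive algorithm, but rather to modify the way decisions are made on the basis of its predictions, or to intervene in society in other ways, for instance through reparations, changes in the tax code, and so on.<br /><br />&#8203;Of course, some of these responses might be politically infeasible. This is especially true for some of the policies that might be most effective in redressing racial and other injustices. Reparations would be a case in point. It is difficult to imagine reparations becoming policy, despite Ta-Nehisi Coates&rsquo; influential <a href="https://www.theatlantic.com/magazine/archive/2014/06/the-case-for-reparations/361631/">recent defense</a>.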
If we can&rsquo;t deal with racial (or other) injustices in these other ways, perhaps the best response is to chip away at injustice by modifying what was already a fair predictive algorithm. It&rsquo;s not the ideal solution, but it might be second best. Still, it is important to be clear that an algorithm&rsquo;s violating Equal False Positive/Negative Rates (or any of the other statistical criteria considered in the Appendix) neither entails nor constitutes the algorithm&rsquo;s unfairness.<br /><br /><br />What next?<br /><strong>For an accessible explanation of the impossibility theorems</strong>, see <a href="https://phenomenalworld.org/analysis/impossible-to-be-fair" target="_blank">this piece on Phenomenal World by Cosmo Grant</a>&#8203;.<br /><strong>For the back-and-forth about COMPAS</strong>, see the <a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing" target="_blank">initial ProPublica report</a>,&nbsp; <span>&#65279;</span><a href="https://www.documentcloud.org/documents/2998391-ProPublica-Commentary-Final-070616.html" target="_blank">Northpointe's response</a><span>&#65279;</span>, and <span>&#65279;</span><a href="https://www.propublica.org/article/propublica-responds-to-companys-critique-of-machine-bias-story" target="_blank">Propublica's counter-response</a><span>&#65279;</span>.<br /><strong>For a more general discussion of the issues surrounding algorithmic fairness</strong>, see <a href="https://www.jstor.org/stable/pdf/24758720.pdf?seq=1" target="_blank">this article</a>.</div>  <div class="paragraph"><br /><br /><strong><u><font size="5">Appendix</font></u></strong><br />In this Appendix, I&rsquo;ll briefly mention some of the other main statistical criteria of fairness that have been considered in the literature:<br /><br /><strong>(4) Balance for the Positive Class:</strong> The average risk score assigned to those individuals who are actually positive is the same for each relevant group.<br /><strong>(5) Balance for the Negative Class:</strong> The average risk score assigned to those individuals who are actually negative is the same for each relevant group.<br /><strong>(6) Equal Positive Predictive Value: </strong>The percentage of individuals predicted to be positive who are actually positive is the same for each relevant group.<br /><strong>(7) Equal Negative Predictive Value: </strong>The percentage of individuals predicted to be negative who are actually negative is the same for each relevant group.<br /><strong>(8) Equal Ratios of False Positive Rate to False Negative Rate:</strong> The ratio of the false positive rate to the false negative rate is the same for each relevant group.<br /><strong>(9) Equal Overall Error Rates:</strong> The number of false positives and false negatives, divided by the number of individuals, is the same for each relevant group.<br /><br />The first two &ndash; (4) and (5) &ndash; can be seen as generalizations of (2) and (3) to the case of continuous risk scores. Indeed, <a href="https://arxiv.org/pdf/1709.02012.pdf">Pleiss et al.</a> refer to the measures involved in (4) and (5) as the &lsquo;generalized false negative rate&rsquo; and the &lsquo;generalized false positive rate,&rsquo; respectively. And, along with (1), it is these two criteria, rather than (2) and (3), that are the target of the aforementioned impossibility theorem from Kleinberg et al.<br /><br />The next two &ndash; (6) and (7) &ndash; can be seen as generalizations of (1) to the case of binary predictions. 
This is how Chouldechova conceives of them. Just as (1) is motivated by the thought that a given risk score should &lsquo;mean&rsquo; the same thing for each group, so (6) and (7) can be motivated by the thought that a given binary prediction should &lsquo;mean&rsquo; the same thing for each group. Chouldechova&rsquo;s impossibility result targets (6) rather than (1), showing that (2), (3), and (6) are not jointly satisfiable unless either base rates are equal or prediction is perfect.<br /><br />The final two &ndash; (8) and (9) &ndash; are also intuitive. For (8), it would seem that violating this criterion would mean that the relative importance of avoiding false positives versus avoiding false negatives was evaluated differently for the different groups. Finally, (9) Equal Overall Error Rates embodies the natural thought that fairness requires that the algorithm be equally accurate overall for each of the different groups.<br /><br />&#8203;We can now easily see that our perfectly fair predictive algorithm violates all these criteria as well (and hence also their expectational or probabilistic analogues), given the same assignment of coins to people and people to rooms as above:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/screen-shot-2020-08-22-at-10-38-08-am_orig.png" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">The fact that the numbers in the Room A column and the Room B column differ for each row means that our fair predictive algorithm violated all of (4)-(9), in addition to (2) and (3). This suffices to show that none of (4)-(9), nor their expectational/probabilistic analogues, is a necessary condition on fairness. Among all these statistical criteria, only (1*) Expectational Calibration Within Groups is left standing as a plausible necessary condition on fairness.</div>]]></content:encoded></item><item><title><![CDATA[The Conjunction Fallacy? Take a Guess.]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/the-conjunction-fallacy-take-a-guess]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/the-conjunction-fallacy-take-a-guess#comments]]></comments><pubDate>Sat, 18 Jul 2020 04:00:00 GMT</pubDate><category><![CDATA[All Most Read]]></category><category><![CDATA[Conjunction Fallacy]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/the-conjunction-fallacy-take-a-guess</guid><description><![CDATA[(This post is co-written with Matt Mandelkern, based on our joint paper&nbsp;on the topic. 2500 words; 12 minute read.)It’s February 2024. Three Republicans are vying for the Presidential nomination, and FiveThirtyEight puts their chances at:Mike Pence: 44%Tucker Carlson: 39%Nikki Haley: 17%Suppose you trust these estimates.&nbsp; Who do you think will win?Some natural answers: "Pence"; "Pence or Carlson"; "Pence, Carlson, or Haley".&nbsp; In a Twitter poll earlier this week, the first two too [...] 
]]></description><content:encoded><![CDATA[<div class="paragraph"><span style="color:rgb(127, 127, 127)">(This post is co-written with <a href="http://mandelkern.hosting.nyu.edu/"><span>Matt Mandelkern</span></a></span><font color="#7F7F7F">, based on our <a href="https://philpapers.org/rec/DORGG" target="_blank">joint paper</a></font><font color="#7F7F7F">&nbsp;on the topic. 2500 words; 12 minute read.)</font><br><br>It&rsquo;s February 2024. Three Republicans are vying for the Presidential nomination, and FiveThirtyEight puts their chances at:<ul style="color:rgb(0, 0, 0)"><li><span>Mike Pence: 44%</span></li><li>Tucker Carlson: 39%</li><li>Nikki Haley: 17%</li></ul>Suppose you trust these estimates.<span>&nbsp;</span> Who do you think will win?<br><br>Some natural answers: "Pence"; "Pence or Carlson"; "Pence, Carlson, or Haley".&nbsp; In a Twitter poll earlier this week, the first two took up a majority (53.4%) of responses:</div><div><div id="440346336875648766" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml"><blockquote class="twitter-tweet tw-align-center"><p lang="en" dir="ltr">Another X-Phi poll:<br><br>It&rsquo;s March 2024. Three Republicans are vying for the nomination. You trust the predictions of FiveThirtyEight, which puts their chances at:<br><br>Mike Pence: 44%<br>Tucker Carlson: 39%<br>Nikki Haley: 17%<br><br>Who do you think will win?</p>&mdash; Kevin Dorst (@kevin_dorst) <a href="https://twitter.com/kevin_dorst/status/1283303517734789124?ref_src=twsrc%5Etfw">July 15, 2020</a></blockquote><br></div></div><div class="paragraph">&#8203;But wait! If you answered "Pence", or "Pence or Carlson", did you commit the <a href="https://en.wikipedia.org/wiki/Conjunction_fallacy"><strong>conjunction fallacy</strong></a>? This is the tendency to say that narrower hypotheses are more likely than broader ones&ndash;&ndash;such as saying that <em>P&amp;Q</em> is more likely than <em>Q</em>&mdash;contrary to the laws of probability.<span>&nbsp;</span> Since every way in which "Pence"&nbsp;or "Pence or Carlson" could be true is <em>also</em>&nbsp;a way in which &ldquo;Pence, Carlson, or Haley&rdquo; would be true, the third option is guaranteed to be more likely than each of the first two.<br><br>Does this mean answering our question with &ldquo;Pence&rdquo; or &ldquo;Pence or Carlson&rdquo; was a mistake?<span>&nbsp;</span><br><br>We don&rsquo;t think so. We think what you were doing was <strong><em>guessing</em></strong>. Rather than simply ranking answers for probability, you were making a tradeoff between being <em>accurate</em> (saying something probable) and being <em>informative</em> (saying something specific). In light of this tradeoff, it&rsquo;s perfectly permissible to guess an answer (&ldquo;Pence&rdquo;) that&rsquo;s less probable&ndash;&ndash;but more informative&ndash;&ndash;than an alternative (&ldquo;Pence, Carlson, or Haley&rdquo;).<br><br>Here we'll argue that this explains&ndash;&ndash;and partially rationalizes&ndash;&ndash;the conjunction fallacy.</div><div><!--BLOG_SUMMARY_END--></div><div class="paragraph"><br><u><span><strong>1. Good Guesses</strong></span></u><br>We make guesses whenever someone poses a question and we can&rsquo;t be sure of the answer. 
&ldquo;Will it rain tomorrow?&rdquo;, &ldquo;I think it will&rdquo;; &ldquo;What day will the meeting be?&rdquo;, &ldquo;Probably Thursday or Friday&rdquo;; &ldquo;Who do you think will win the nomination?&rdquo;, &ldquo;I bet Pence will&rdquo;; and so on.<br><br>What sorts of guesses are <em>good</em> guesses? The <a href="https://philpapers.org/rec/DORGG" target="_blank">full paper</a>&nbsp;argues that there are a variety of robust and intricate patterns, drawing on a <a href="https://www.kevindorst.com/uploads/1/1/3/6/113613527/tgb_website.pdf">fascinating paper by Ben Holgu&iacute;n</a>.<span>&nbsp;</span> Here we&rsquo;ll just focus on the main patterns in our lead example.<br><br>Suppose you have the probability estimates from above (Pence, 44%; Carlson, 39%; Haley, 17%), and we ask you: &ldquo;Who do you think will win?&rdquo; As we've seen, a variety of answers seem reasonable:<br><br>(1) "Pence"<span>&nbsp; &nbsp;</span><span style="color:rgb(0, 0, 0)">&#10003;</span><br>(2) "Pence or Carlson"<span>&nbsp; &nbsp;</span><span style="color:rgb(0, 0, 0)">&#10003;</span><br>(3) "Pence, Carlson, or Haley"<span>&nbsp;</span> &nbsp;<span style="color:rgb(0, 0, 0)">&#10003;</span><br><br>In contrast, a variety of answers sound bizarre:<br><br>(4) "Carlson"&nbsp; &nbsp;<span style="color:rgb(32, 33, 34)">&#10008;</span><br>(5) "Carlson or Haley" (= "Not Pence") <span>&nbsp;</span> <span style="color:rgb(32, 33, 34)">&#10008;</span><br>(6) "Pence or Haley"&nbsp;<span>&nbsp;</span> <span style="color:rgb(32, 33, 34)">&#10008;</span><br><br>We&rsquo;ve run examples like this by dozens of people, and the judgments are robust&ndash;&ndash;for instance, in a similar Twitter poll in which "Pence or Haley" was an explicit option, it was the least-common answer (6.7%):</div><div><div id="405031075937227206" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml"><blockquote class="twitter-tweet tw-align-center"><p lang="en" dir="ltr">*X-Phi survey*<br><br>Suppose it's February 2024, and three Republicans are vying for the nomination. FiveThirtyEight puts their chances at:<br><br>- Mike Pence, 45%<br>- Tucker Carlson, 36%<br>- Nikki Haley, 19%<br><br>Suppose you trust these estimates. What do you think's likely to happen?</p>&mdash; Kevin Dorst (@kevin_dorst) <a href="https://twitter.com/kevin_dorst/status/1282661066422587394?ref_src=twsrc%5Etfw">July 13, 2020</a></blockquote><br></div></div><div class="paragraph"><span style="color:rgb(42, 42, 42)">What&rsquo;s going on?</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">How do we explain why (1)&ndash;(3) are good guesses and (4)&ndash;(6) are bad ones?</span><br><br><span style="color:rgb(42, 42, 42)">Our basic idea is a&nbsp;</span><a href="http://library.mibckerala.org/lms_frame/eBook/Popular%20Phylosophy-%20the%20will%20to%20believe.pdf">Jamesian thought</a><span style="color:rgb(42, 42, 42)">: making good guesses involves trading off two competing goals.</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">On the one hand, we want to avoid error&ndash;&ndash;to be accurate. On the other, we want to get at the truth&ndash;&ndash;to be informative.</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">These two goals compete: the more informative your guess, the less likely it is to be true. 
<strong>A good guess is one that optimizes this tradeoff between accuracy and informativity.</strong></span><br><br><span style="color:rgb(42, 42, 42)">More precisely, we assume that guesses have&nbsp;</span><em style="color:rgb(42, 42, 42)">answer values</em><span style="color:rgb(42, 42, 42)">&nbsp;that vary with their accuracy and informativity. True guesses are better than false ones, and&nbsp;</span><em style="color:rgb(42, 42, 42)">informative</em><span style="color:rgb(42, 42, 42)">&nbsp;true guesses are better than&nbsp;</span><em style="color:rgb(42, 42, 42)">un</em><span style="color:rgb(42, 42, 42)">informative true ones.</span><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><span style="color:rgb(42, 42, 42)">Given that, here&rsquo;s our proposal:</span><br><br><strong style="color:rgb(42, 42, 42)">Guessing as Maximizing:</strong><span style="color:rgb(42, 42, 42)">&nbsp;In guessing, we try to select an answer that has as much answer-value as possible&ndash;&ndash;we&nbsp;</span><em style="color:rgb(42, 42, 42)">maximize expected answer-value</em><span style="color:rgb(42, 42, 42)">.</span><br><br><span style="color:rgb(42, 42, 42)">To make this precise, we can clarify the notion of&nbsp;</span><em style="color:rgb(42, 42, 42)">informativity</em><span style="color:rgb(42, 42, 42)">&nbsp;using a&nbsp;</span><a href="https://semprag.org/index.php/sp/article/view/sp.5.6/0">standard model of a question</a><span style="color:rgb(42, 42, 42)">. Questions can be thought of as partitioning the space of open possibilities into a set of complete answers. For example: the set of complete answers to &ldquo;Will it rain tomorrow?&rdquo; is {</span><em style="color:rgb(42, 42, 42)">it will rain, it won&rsquo;t rain</em><span style="color:rgb(42, 42, 42)">}; the set of complete answers to &ldquo;Who will win the nomination?&rdquo; is {</span><em style="color:rgb(42, 42, 42)">Pence will win, Carlson will win, Haley will win</em><span style="color:rgb(42, 42, 42)">}; and so on.</span><br><br><span style="color:rgb(42, 42, 42)">In response to a question, a guess is informative to the extent that it rules out alternative answers.</span><span style="color:rgb(42, 42, 42)">&nbsp;</span><span style="color:rgb(42, 42, 42)">Thus &ldquo;Pence&rdquo; is more informative than &ldquo;Pence or Carlson&rdquo;, which in turn is more informative than &ldquo;Pence, Carlson, or Haley&rdquo;.</span><br><br><span style="color:rgb(42, 42, 42)">Given this, we can explain why (4)&ndash;(6) are bad guesses. Consider &ldquo;Carlson&rdquo;. It&rsquo;s exactly as informative as &ldquo;Pence&rdquo;&ndash;&ndash;both rule out 2 of the 3 possible complete answers&ndash;&ndash;but it is&nbsp;</span><em style="color:rgb(42, 42, 42)">less probable:&nbsp;</em><span style="color:rgb(42, 42, 42)">&ldquo;Carlson&rdquo; has a 39% chance of being true, while &ldquo;Pence&rdquo; has a 44% chance.</span><span style="color:rgb(42, 42, 42)">&nbsp;</span><span style="color:rgb(42, 42, 42)">Thus if you&rsquo;re trying to maximize expected answer value, you should never guess &ldquo;Carlson&rdquo;, since &ldquo;Pence&rdquo; is equally informative but more likely to be accurate.</span><br><br><span style="color:rgb(42, 42, 42)">Similarly, consider &ldquo;Pence or Haley&rdquo;. What&rsquo;s odd about this guess is that it &ldquo;skips&rdquo; over Carlson. In particular, if we swap &ldquo;Haley&rdquo; for &ldquo;Carlson&rdquo;, we get a different guess that&rsquo;s equally informative but, again, more probable. 
(&ldquo;Pence or Carlson&rdquo; is 44 + 39 = 83% likely to be true, while &ldquo;Pence or Haley&rdquo; is 44 + 17 = 61% likely.)</span><br><br><span style="color:rgb(42, 42, 42)">On the other hand, the Guessing as Maximizing account explains why (1)&ndash;(3) can all be&nbsp;</span><em style="color:rgb(42, 42, 42)">good</em><span style="color:rgb(42, 42, 42)">&nbsp;guesses. The basic point: if you really care about being informative, you should choose a maximally specific answer (&ldquo;Pence&rdquo;); if you really care about being accurate, you should choose a maximally likely answer (&ldquo;Pence, Carlson, or Haley&rdquo;); and intermediate ways of weighting these constraints lead to good guesses at intermediate levels of informativity (&ldquo;Pence or Carlson&rdquo;).</span><br><br>(For a formal exposition of all this, see the Appendix or <a href="https://philpapers.org/rec/DORGG" target="_blank">the paper</a>.)</div><div class="paragraph" style="text-align:right;"><span style="color:rgb(127, 127, 127)">(1400 words left)</span></div><div class="paragraph"><u><span><strong>3. The conjunction fallacy</strong></span></u><br>With this account of guessing in hand, let&rsquo;s apply it to our opening observation: guessing leads to the conjunction fallacy.<br><br>Recall: this is the tendency to rate narrower hypotheses (like &ldquo;P&amp;Q&rdquo;) as more probable than broader ones (like &ldquo;Q&rdquo;). It&rsquo;s the star of the show in the <a href="https://blackwells.co.uk/bookshop/product/9781412959032?gC=5a105e8b&amp;gclid=Cj0KCQjwgJv4BRCrARIsAB17JI48AcrOaaDXJN11K31zObh6XpkUlr_pVV2LETD4tg134KbxBkqmlZAaAgkPEALw_wcB">common argument</a> that people&rsquo;s reasoning is systematically irrational, lacking an understanding of the&nbsp;<a href="https://www.amazon.co.uk/s?k=superforecasting+philip+tetlock&amp;adgrpid=51354631097&amp;gclid=Cj0KCQjwgJv4BRCrARIsAB17JI6lntHKwv7GLitSD4pS1QJIX0bi8LqNqctE-vTJewh3kumtqFOZh-saAu5lEALw_wcB&amp;hvadid=259090618310&amp;hvdev=c&amp;hvlocphy=1006976&amp;hvnetw=g&amp;hvqmt=b&amp;hvrand=12232881106983861359&amp;hvtargid=kwd-315087413013&amp;hydadcr=24459_1816151&amp;tag=googhydr-21&amp;ref=pd_sl_72nuqj67q2_b">basic rules of probability</a> and <a href="https://www.amazon.co.uk/Thinking-Fast-Slow-Daniel-Kahneman/dp/0141033576/ref=sr_1_1?dchild=1&amp;keywords=thinking+fast+and+slow&amp;qid=1594291304&amp;sr=8-1">instead using simple heuristics</a>.<br><br>The most famous example is from the original <a href="https://psycnet.apa.org/record/1984-03110-001">paper by Tversky and Kahneman</a>:</div><blockquote><span>Linda is 31 years old,</span> <span>single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.&nbsp;Which of the following is more likely?<br><br>&#8203;a)<span>&nbsp;</span>Linda is a bank teller and is active in the feminist movement.<br>b) Linda is a bank teller.</span></blockquote><div class="paragraph">A majority of subjects said &ldquo;feminist bank teller&rdquo; (FT) was more probable than &ldquo;bank teller&rdquo; (T). Again, this violates the laws of probability: every possibility in which Linda is a feminist bank teller is <em>also</em> one in which she&rsquo;s a bank teller&ndash;&ndash;but not vice versa.<br><br>What&rsquo;s going on here? 
Our proposal:<br><br><strong>Answer-Value Account:</strong> <span>People commit the conjunction fallacy because they rank answers accordin</span><span>g to their<em>&nbsp;</em></span><em>quality as guesses</em> <span>(their expected answer-value), rather than their</span> <em>probability of being true</em><span>.</span><br><br>&#8203;In other words, we think the Linda case is analogous to the following one:</div><blockquote><span>It&rsquo;s 44% likely that Pence will win, 39% likely that Carlson will, and 17% likely that Haley will. Which of the following are you more inclined to guess?<br><br>a) Pence or Carlson will win.<br>&#8203;b) Pence, Carlson, or Haley will win.</span><br><span></span></blockquote><div class="paragraph">Given the above survey, we can expect that around half of people would choose (a).&nbsp; (Since 35/(35 + 38.2) = 47.8%.) The crucial point is that in both the Linda and Pence cases, option (a) is less probable <em>but more informative</em> than option (b) with respect to the salient question&ndash;&ndash;e.g. &ldquo;What are Linda&rsquo;s social and occupational roles?&rdquo; or &ldquo;Who will win the nomination?&rdquo;<br><br>In particular, our model of expected answer value predicts that you should rate&nbsp; &ldquo;feminist bank teller&rdquo; as a better guess than &ldquo;bank teller&rdquo; whenever you&rsquo;re sufficiently confident that Linda is a feminist <em>given</em> that she&rsquo;s a bank teller&ndash;&ndash;whenever P(F|T) is sufficiently high, where the threshold for &ldquo;sufficient&rdquo; is determined by how much you value being informative <span style="color:rgb(42, 42, 42)">(see the Appendix)</span>.<br><br>Why is this conditional probability P(F|T) what matters? Because although the probability of &ldquo;feminist bank teller&rdquo; is always less than that of &ldquo;bank teller&rdquo;, <em>how much</em> less is determined by this conditional probability, since&nbsp; P(FT) = P(T)&bull;P(F|T). Thus w<span>hen P(F|T) is h</span><span>igh, switching from "bank teller" to "feminist bank teller" has only a small cost to accuracy&ndash;&ndash;which is easily outweighed by the gain in informativity. Our account therefore makes the following prediction:<br><br><strong>Prediction:</strong>&nbsp;Rates of ranking the conjunction AB as more probable than the conjunct B will tend to scale with P(A|B).</span><br><br>This prediction is borne out by the data; a clean example comes from <a href="https://link.springer.com/article/10.1007/s11229-009-9701-y">Tentori and Crupi (2012)</a>. They give a vignette in which they introduce Mark, and then say that he holds&nbsp;<em>X</em> tickets in a 100-ticket raffle&ndash;&ndash;where&nbsp;<em>X</em> varies between 0 and 100 across different experimental conditions.<span>&nbsp;</span> They then ask subjects which of the following is more likely:</div><blockquote>a)&nbsp;<span style="color:rgb(0, 0, 0)">Mark is a scientist and will win the lottery.<br>&#8203;b)&nbsp;Mark is a scientist.</span><br></blockquote><div class="paragraph">The rates of ranking &ldquo;scientist and will win the lottery&rdquo; (WS) as more likely than (or equally likely as)<span>&nbsp;</span> &ldquo;scientist&rdquo; (S) scaled directly with the number of tickets Mark held, i.e. 
with the probability that Mark wins the lottery given that he&rsquo;s a scientist, P(W|S) (which equals P(W), since S and W are independent):</div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/20-7-15-p-w-s-graph_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div class="paragraph"><span style="color:rgb(42, 42, 42)">This is exactly what the answer-value account predicts.</span></div><div class="paragraph" style="text-align:right;"><font color="#818181">(800 words left)</font></div><div class="paragraph"><u><span><strong>4. So is the conjunction fallacy irrational?</strong></span></u><br>Suppose the answer-value account is right: people commit the conjunction fallacy when they rate answers for their quality as guesses rather than for their probability of being true. What would that imply about the conjunction fallacy, and its role in the debate about human rationality more broadly?<br><br>The answer is subtle.<br><br>On the one hand, it turns out that questions like &ldquo;What do you think is (most) likely?&rdquo; are standard ways of eliciting guesses&ndash;&ndash;which in turn have a very different normative profile than probability judgments.<span>&nbsp;</span>For example, when we asked &ldquo;Who do you think will win?&rdquo; in our opening question, answering &ldquo;Pence&rdquo; is not irrational&ndash;&ndash;nor would it be so if we tweaked the wording to &ldquo;What&rsquo;s most likely?&rdquo;; &ldquo;What do you bet will happen?&rdquo;; etc. These are all ways of eliciting guesses. (Note: our second Twitter poll used "What do you think's likely to happen?")<br><br>Since these prompts are standard ways of eliciting answers in conjunction-fallacy experiments, this complicates our assessment of such answers. Pragmatically, the question being asked is not a literal question about probability: people will hear these questions as requests to <em>guess</em>&mdash;to trade off probability against informativity&ndash;&ndash;rather than to merely assess probability.<span>&nbsp;</span> And reasonably so.<br><br>On the other hand, using such guesses to guide your statements and actions can lead to mistakes. This is clearest in experiments that elicit the conjunction fallacy while asking people to <em>bet</em> on options. Of course, &ldquo;What do you bet will happen?&rdquo; is a natural way of eliciting a guess in conversation (&ldquo;I bet Pence&rsquo;ll win&rdquo;). Nevertheless, if we actually give you money and you choose to let it ride on &ldquo;Pence or Carlson&rdquo; rather than &ldquo;Pence, Carlson, or Haley&rdquo;, then you&rsquo;ve made a mistake.&nbsp;Moreover, experiments show that people <a href="https://link.springer.com/article/10.3758/BF03195280">do have a tendency to bet like this</a>&ndash;&ndash;though the <a href="https://link.springer.com/article/10.1007/s11229-008-9377-8">rates of the fallacy diminish somewhat</a>.<br><br>In scenarios like this, the conjunction fallacy is clearly a mistake. 
<strong>The c</strong><strong>rucial question:</strong> What does this mistake reveal about human reasoning?<br><br>If our account is right, it does <em>not</em> reveal&ndash;&ndash;<a href="https://www.amazon.co.uk/Superforecasting-Science-Prediction-Philip-Tetlock/dp/1847947158/ref=sr_1_1?adgrpid=51354631097&amp;dchild=1&amp;gclid=Cj0KCQjwgJv4BRCrARIsAB17JI6njfStEWGuBAU3z94VxXXjrRnJyxxnxhs7OGpZHpXghsZsXcdzcbMaAhaSEALw_wcB&amp;hvadid=259056108174&amp;hvdev=c&amp;hvlocphy=1006976&amp;hvnetw=g&amp;hvqmt=b&amp;hvrand=14547295071324018319&amp;hvtargid=kwd-315087413013&amp;hydadcr=24459_1816150&amp;keywords=superforecasting+philip+tetlock&amp;qid=1594289291&amp;sr=8-1&amp;tag=googhydr-21">as</a> <a href="https://www.amazon.co.uk/Misbehaving-Making-Behavioral-Economics/dp/B00VR89UTG/ref=sr_1_1?crid=264JDQ7K4F3RV&amp;dchild=1&amp;keywords=the+making+of+behavioral+economics&amp;qid=1594289533&amp;s=books&amp;sprefix=the+making+of+behav%2Cstripbooks%2C396&amp;sr=1-1" target="_blank">is&nbsp;commonly</a>&nbsp;<a href="https://www.amazon.co.uk/Rational-Choice-Uncertain-World-Psychology/dp/1412959039/ref=sxts_sxwds-bia-wc-p13n1_0?cv_ct_cx=rational+choice+in+an+uncertain+world&amp;dchild=1&amp;keywords=rational+choice+in+an+uncertain+world&amp;pd_rd_i=1412959039&amp;pd_rd_r=52e83148-4b04-445d-9494-0d4d76eeb579&amp;pd_rd_w=vPv8J&amp;pd_rd_wg=EedsH&amp;pf_rd_p=3b853e83-6bf6-4856-96c6-ae2430a26fcf&amp;pf_rd_r=9AWH91A0PQSFXQMACY1F&amp;psc=1&amp;qid=1594289517&amp;sr=1-1-c8b680f7-0dc9-4abe-aa5a-ccfde0ac07ae">claimed</a>&ndash;&ndash;that human judgment works in a non-probabilistic way.<span>&nbsp;</span> After all, what&rsquo;s happening is that people are <em>guessing</em> and then acting based on that guess&ndash;&ndash;and guessing <em>requires</em> an (implicit) assessment of probability. Instead, the conjunction fallacy reveals that people are bad at <em>pulling apart</em> judgments about pure probability from a much more common type of judgment&ndash;&ndash;the quality of a guess.<br><br>Why are people bad at this?<span>&nbsp;</span> Our proposal: because guessing is something we do all the time. Moreover, it&rsquo;s something that <em>makes sense</em> to do all the time.<span>&nbsp;</span>We can&rsquo;t have degrees of belief about all possibly-relevant claims&ndash;&ndash;no system could, since <a href="https://www.sciencedirect.com/science/article/abs/pii/000437029390036B">general probabilistic inference is intractable</a>. So instead, we construct probability judgments about the small set of claims generated by the question under discussion, use them to formulate a guess, and then <em>reason within that guess</em>.<br><br>There&rsquo;s empirical evidence that people do this. 
For example: poker players decide what to bet by <a href="https://www.toppokersites.com/strategy/hand-ranges/">guessing what hands their opponents might have</a>; doctors decide what tests to perform by <a href="https://journals.sagepub.com/doi/abs/10.1177/0272989X18786358">guessing their patients' ailments</a>; scientists decide what experiments to run by <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.569.3425&amp;rep=rep1&amp;type=pdf">guessing which hypothesis is true</a>; and so on.<br><br>People do this, we think, because probability alone doesn&rsquo;t get you very far.<span>&nbsp;</span> The most probable answer is always, &ldquo;Something will happen, somewhere, sometime.&rdquo; Such certainly-true answers don&rsquo;t help guide our actions&ndash;&ndash;instead, we need to trade off such certainty for some amount of informativity.<br><br>If this is right, the error revealed by the conjunction fallacy is in some ways like that revealed by the <strong>Stroop test</strong>. Watch the following video and try&ndash;&ndash;as quickly as possible&ndash;&ndash;to say aloud the <span><em>color of the text</em></span> presented (do <em>not</em> read the word):</div><div><div id="750079665511339782" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml"><div style="text-align: center;"><iframe width="560" height="315" src="https://www.youtube.com/embed/E92GSwr46DY" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe></div></div><div class="paragraph"><br>It&rsquo;s hard!<span>&nbsp;</span> And the reason it&rsquo;s hard is that it requires doing something that you don&rsquo;t normally do: assess the color of a word <em>without reading it</em>.<span>&nbsp;</span> Yet throughout most of life, what you do when presented with a word&ndash;&ndash;what makes sense to do&ndash;&ndash;is read it. In short: a disposition that involves sophisticated processing, and is rational in general, can lead to errors in certain cases.<br><br>Likewise, it&rsquo;s hard not to commit the conjunction fallacy because that requires doing something that you don&rsquo;t normally do: assess the probability of an uncertain claim <em>without assessing it as a guess</em>. Yet throughout most of life, what you do when presented with such a claim&ndash;&ndash;what makes sense to do&ndash;&ndash;is assess its quality as a guess. In short: a disposition that involves sophisticated processing, and is rational in general, can lead to errors in certain cases.<br><br>Upshot: although the conjunction fallacy is sometimes a mistake, it is <em>not</em> a mistake that reveals deep-seated irrationality. Instead, it reveals that when forming judgments under uncertainty, we need to trade off accuracy for informativity&ndash;&ndash;we need to guess.<br><br><br>What next?<br><strong>If you enjoyed this post,</strong>&nbsp;please consider <a href="https://twitter.com/kevin_dorst/status/1284483143366475780" target="_blank">retweeting it</a>, following us on Twitter (<a href="https://twitter.com/kevin_dorst" target="_blank">Kevin</a>,&nbsp;<a href="https://twitter.com/MMandelkern" target="_blank">Matt</a>), or <a href="https://mailchi.mp/279517050568/stranger_apologies_signup" target="_blank">signing up for the newsletter</a>. 
Thanks!<br><strong>If you&rsquo;re interested in the details,</strong> including other potential applications of guessing to epistemology, philosophy of language, and cognitive science, check out the&nbsp;<a href="https://philpapers.org/rec/DORGG" target="_blank">full paper</a>.<br><strong>If you want to learn more about guessing</strong>, also check out this <a href="https://www.kevindorst.com/uploads/1/1/3/6/113613527/tgb_website.pdf">paper by Ben Holgu&iacute;n</a>, <a href="https://www.oxfordscholarship.com/view/10.1093/oso/9780198833314.001.0001/oso-9780198833314-chapter-4">this one by Sophie Horowitz</a>, or <a href="https://www.oxfordscholarship.com/view/10.1093/oso/9780198833314.001.0001/oso-9780198833314-chapter-4">this classic by Kahneman and Tversky</a>.<br><strong>If you want to learn more about the conjunction fallacy,</strong> <a href="http://piotr-evdokimov.com/linda.pdf">Tversky and Kahneman&rsquo;s original paper</a> is fantastic, as is this <a href="http://fitelson.org/tcr_2013.pdf">2013 paper by Tentori et al</a>.&ndash;&ndash;which provides a good overview as well as its own interesting proposal and data.</div><div class="paragraph"><br><u><span><strong><font size="5">Appendix</font></strong></span></u><br>Here we&rsquo;ll state some of the core ideas a bit more formally; see the <a href="https://philpapers.org/rec/DORGG" target="_blank">full paper</a>&nbsp;for the details.<br><br><strong>How can we generalize our observations about good and bad guesses?</strong><br><br>Recall that we can model a question as the set of its complete answers: &ldquo;Who will win the nomination?&rdquo; corresponds to {<em>Pence will win, Carlson will win, Haley will win</em>}. Our two most important observations about good guesses come from <a href="https://www.kevindorst.com/uploads/1/1/3/6/113613527/tgb_website.pdf">Holgu&iacute;n&rsquo;s paper</a>:<br><br><strong style="color:rgb(42, 42, 42)">Filtering:</strong> <span style="color:rgb(42, 42, 42)">A guess is permissible only if it is</span> <em style="color:rgb(42, 42, 42)">filtered</em><span style="color:rgb(42, 42, 42)">: if it includes a com</span><span style="color:rgb(42, 42, 42)">plete answer</span> <em style="color:rgb(42, 42, 42)">q</em><span style="color:rgb(42, 42, 42)">, it must include all complete answers that are more probable than</span> <em style="color:rgb(42, 42, 42)">q</em><span style="color:rgb(42, 42, 42)">.</span><br><br>This explains the answers (4)&ndash;(6) that sound odd. &ldquo;Carlson&rdquo; (of course) includes &ldquo;Carlson&rdquo; but omits the more probable &ldquo;Pence&rdquo;.&nbsp; &ldquo;Carlson or Haley&rdquo; does likewise. 
Meanwhile, &ldquo;Pence or Haley&rdquo; includes &ldquo;Haley&rdquo; but excludes the more probable &ldquo;Carlson&rdquo;.<br><br>In contrast, the answers that sound natural&ndash;&ndash;&ldquo;Pence&rdquo;, &ldquo;Pence or Carlson&rdquo;, and &ldquo;Pence, Carlson, or Haley&rdquo;&ndash;&ndash;are all filtered.<br><br>The second observation Holgu&iacute;n makes is that <em>any</em> filtered guess is permissible:<br><br><strong style="color:rgb(42, 42, 42)">Optionality:</strong> <span style="color:rgb(42, 42, 42)">There is a permissible (filtered) guess that includes any number of complete answers.</span><br><br><span style="color:rgb(0, 0, 0)">In other words, it&rsquo;s permissible for your guess to include 1 complete answer (&ldquo;Pence&rdquo;), 2 complete answers (&ldquo;Pence or Carlson&rdquo;), or all three (&ldquo;Pence, Carlson, or Haley&rdquo;).</span><br><br><strong>How does our model of the accuracy-informativity tradeoff explain these constraints?</strong><br><br><span style="color:rgb(0, 0, 0)">There are two steps.</span><span style="color:rgb(0, 0, 0)">&nbsp;</span> <span style="color:rgb(0, 0, 0)">First, true guesses are always better than false guesses, so we&rsquo;ll assign false guesses an answer-value of 0, while true guesses always get some positive value. That positive value is determined by the (true) guess&rsquo;s informativity, as well as how much you value informativity (in the given context).</span><br><br><span style="color:rgb(0, 0, 0)">Precisely, let the</span> <strong style="color:rgb(0, 0, 0)">informativity</strong> <span style="color:rgb(0, 0, 0)">of $p$, $Q_p$, be the proportion of the complete answers to $Q$ that $p$ rules out.</span><span style="color:rgb(0, 0, 0)">&nbsp;</span> <span style="color:rgb(0, 0, 0)">Thus relative to the question &ldquo;Who will win the nomination?&rdquo;, &ldquo;Pence&rdquo; has informativity</span> <span style="color:rgb(0, 0, 0)">&#8532;</span><span style="color:rgb(0, 0, 0)">, &ldquo;Pence or Carlson&rdquo; has informativity</span> <span style="color:rgb(0, 0, 0)">&#8531;</span><span style="color:rgb(0, 0, 0)">, and &ldquo;Pence, Carlson, or Haley&rdquo; has informativity 0. Meanwhile, let $J \ge 1$ be a parameter that captures the</span> <em style="color:rgb(0, 0, 0)">J</em><span style="color:rgb(0, 0, 0)">amesian value of informativity. If $p$</span><span style="color:rgb(0, 0, 0)">&nbsp;is true, its answer-value is $J$</span><span style="color:rgb(0, 0, 0)">&nbsp;raised to the power of $p$</span><span style="color:rgb(0, 0, 0)">&rsquo;s informativity: $J^{Q_p}$.</span><br><br><span style="color:rgb(0, 0, 0)">Thus the</span> <strong style="color:rgb(0, 0, 0)">expected answer-value</strong><span style="color:rgb(0, 0, 0)">, given question $Q$, value-of-informativity $J$, and probabilities $P$, is:</span></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/screen-shot-2020-07-16-at-8-27-08-am.png?1594884463" alt="Picture" style="width:464;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div class="paragraph">This formula reveals the accuracy-informativity tradeoff.<span>&nbsp;</span> It&rsquo;s easy to make the first term (i.e. $P(p)$) large by choosing a trivial truth (&ldquo;Pence, Carlson, or Haley&rdquo;), but this makes the second term (i.e. $J^{Q_p}$) small. 
Conversely, it&rsquo;s easy to make the second term large by choosing a specific guess (&ldquo;Pence&rdquo;), but this makes the first term small. A good guess is one that optimizes this tradeoff between saying something accurate and saying something informative, given your probabilities $P$ and value-of-informativity $J$.<br><br><strong>Our proposal:</strong>&nbsp;It&rsquo;s permissible for $p$ to be your guess about $Q$ iff, for some value of $J\ge 1$, $p$ maximizes this quantity $E^J_Q(p)$.<br><br>This explains both Filtering and Optionality.<span>&nbsp;</span>Filtering is simple. If you choose a non-filtered guess (like &ldquo;Pence or Haley&rdquo;), it includes a complete answer that is less probable than an alternative that it excludes (&ldquo;Haley&rdquo; is less probable than &ldquo;Carlson&rdquo;).<span>&nbsp;</span> Thus by swapping out the former for the latter, we obtain a new guess (&ldquo;Pence or Carlson&rdquo;) that is equally informative but more probable&ndash;&ndash;and, therefore, has higher expected answer-value.<br><br>Optionality takes a bit more work, but the basic idea is simple.<span>&nbsp;</span> When $J$ has the minimal value of 1, being informative carries no extra value ($1^{Q_p} = 1$, no matter what $Q_p$ is)&ndash;&ndash;so the best option is to say the filtered guess that you&rsquo;re certain of (&ldquo;Pence, Carlson, or Haley&rdquo;). But as $J$ grows, informativity steadily matters more and more&ndash;&ndash;meaning that more specific guesses get steadily higher expected answer-values. In our example: when $J &lt; 1.75$, &ldquo;Pence, Carlson, or Haley&rdquo; is best; when $6.71 &gt; J &gt; 1.75$, &ldquo;Pence or Carlson&rdquo; is best; and when $J &gt; 6.71$, &ldquo;Pence&rdquo; is best:</div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/20-7-16-eav-graphs_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div class="paragraph">Thus by varying the value of informativity, the accuracy-informativity tradeoff can lead you to guess different filtered answers.<br><br><strong>How do we derive our predictions about the conjunction fallacy from this model?</strong><br><br>We can illustrate this precisely with a simple example (which generalizes). Suppose the question under discussion is the result of crossing &ldquo;Is Linda a feminist?&rdquo; ($F$ or $\overline{F}$?) with &ldquo;Is she a bank teller?&rdquo; ($T$ or $\overline{T}$?), so the possible complete answers are:</div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/editor/screen-shot-2020-07-16-at-8-29-50-am.png?1594884617" alt="Picture" style="width:307;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div class="paragraph">Our model says that you should rank &ldquo;feminist bank teller&rdquo; as a better guess than &ldquo;bank teller&rdquo; iff it has higher expected answer-value. 
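<div class="paragraph">As a quick numeric sanity check&ndash;&ndash;a sketch of our own, not code from the paper&ndash;&ndash;we can compute the quantity $E^J_Q(p) = P(p) \cdot J^{Q_p}$ for the three filtered guesses in the nomination example, and watch the optimal guess shift as $J$ grows, recovering the cutoffs around $J = 1.75$ and $J = 6.71$ mentioned above:</div><pre>
# Sketch: expected answer-value E = P(guess) * J ** informativity, where
# informativity is the proportion of complete answers the guess rules out.
# Probabilities are from the running example: Pence 44%, Carlson 39%, Haley 17%.

probs = {"Pence": 0.44, "Carlson": 0.39, "Haley": 0.17}

def expected_answer_value(guess, J, n_answers=3):
    p = sum(probs[name] for name in guess)                # P(guess)
    informativity = (n_answers - len(guess)) / n_answers  # Q_p
    return p * J ** informativity

filtered_guesses = [("Pence",), ("Pence", "Carlson"), ("Pence", "Carlson", "Haley")]

for J in [1.5, 2, 5, 7]:
    best = max(filtered_guesses, key=lambda g: expected_answer_value(g, J))
    print(J, "->", " or ".join(best))
# The three-way guess wins for J below roughly 1.75, "Pence or Carlson" wins
# for J between roughly 1.75 and 6.71, and "Pence" wins for larger J.
</pre><div class="paragraph">(An analogous computation on the four-cell Linda partition recovers the $J &gt; 2.44$ threshold derived just below.)</div>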
Returning to Linda: since the informativity of "feminist bank teller" is &frac34; (it rules out &frac34; of the cells of the partition) and the informativity of "bank teller" is &frac12; (it rules out &frac12; of the cells), our above formula implies that the expected answer-value of the former is higher than that of the latter iff:<br><span></span></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/screen-shot-2020-07-16-at-8-34-52-am.png?1594884933" alt="Picture" style="width:319;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div class="paragraph">That is: you should guess that Linda is a &ldquo;feminist bank teller&rdquo; over &ldquo;bank teller&rdquo; whenever you are sufficiently confident that Linda is a feminist <em>given</em> that she&rsquo;s a bank teller&ndash;&ndash;where the threshold for &ldquo;sufficient&rdquo; is determined by the value of informativity, $J$.<br><br>For example, if you&rsquo;re 80% confident she&rsquo;s a feminist, independently of whether she&rsquo;s a bank teller, then this condition is met iff $P(F|T) = P(F) = 0.8 &gt; \frac{1}{J^{1/4}}$, iff $J&gt;2.44$.<span>&nbsp;</span>&nbsp;(Compare: in our original example, you should guess &ldquo;Pence&rdquo; iff $J &gt; 6.71$.) Thus we expect the conjunction fallacy to be common in the Linda scenario so long as subjects are sufficiently confident that Linda is a feminist, given that she&rsquo;s a bank teller&ndash;&ndash;as seems reasonable, given the vignette.</div></div>]]></content:encoded></item><item><title><![CDATA[Thoughts on Thoughts on Rationalization (Guest Post)]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/thoughts-on-thoughts-on-rationalization-guest-post]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/thoughts-on-thoughts-on-rationalization-guest-post#comments]]></comments><pubDate>Sat, 04 Jul 2020 10:18:35 GMT</pubDate><category><![CDATA[Rationalization]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/thoughts-on-thoughts-on-rationalization-guest-post</guid><description><![CDATA[(This is a guest post by Jake Quilty-Dunn, replying to my reply to his original post on rationalization and (ir)rationality.&nbsp;1000 words; 5 minute read.)  Thanks very much to Kevin for inviting me to defend my comparatively gloomy picture of human nature on this blog, and for continuing the conversation with his thoughtful reply.      Kevin wonders whether negative affect might signal not cognitive dissonance per se, but rather an impetus to do some high-effort problem solving. This is an [...] 
]]></description><content:encoded><![CDATA[<div class="paragraph"><font color="#818181">(This is a guest post by <a href="https://sites.google.com/site/jakequiltydunn/home" target="_blank">Jake Quilty-Dunn</a>, replying to <a href="https://www.kevindorst.com/stranger_apologies/thoughts-on-rationalization" target="_blank">my reply</a> to <a href="https://www.kevindorst.com/stranger_apologies/rationalization-and-irrationality-guest-post" target="_blank">his original post</a> on rationalization and (ir)rationality.&nbsp;1000 words; 5 minute read.)</font></div>  <div class="paragraph"><span><span style="color:rgb(0, 0, 0)">Thanks very much to Kevin for inviting me to defend my comparatively gloomy picture of human nature on this blog, and for continuing the conversation with his thoughtful reply.</span></span><br /><span></span></div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div class="paragraph"><span>Kevin wonders whether negative affect might signal not cognitive dissonance per se, but rather an impetus to do some high-effort problem solving. This is an interesting&mdash;and empirical&mdash;question.</span><br /><br /><span>Fortunately, there is some relevant evidence. Negative affect correlates with <a href="https://www.frontiersin.org/articles/10.3389/fpsyg.2017.01346/full">worse performance on problem solving</a>. It also <a href="https://www.sciencedirect.com/science/article/pii/S0959475206001150">hampers learning</a> how to solve puzzles and transferring that learning to novel cases. And there&rsquo;s some evidence that negative affect specifically makes people more likely to engage in <a href="https://psycnet.apa.org/record/1990-09907-001">low-effort as opposed to high-effort</a> problem-solving strategies.</span><br /><br /><span>I pause to note that there is disagreement about this issue, and the evidence is nuanced in a way that makes it hard to capture in this short blog post. <a href="https://psycnet.apa.org/record/1991-98595-002">Schwarz and colleagues</a> argued that negative affect enhances detail-oriented vs. heuristic-based cognition. A <a href="https://www.sciencedirect.com/science/article/abs/pii/S1057740801703129">later review</a> challenged this hypothesis, as does the more recent evidence just cited, though <a href="https://psycnet.apa.org/record/2005-16473-009">other results</a> suggest negative affect can indeed re-orient problem-solving strategies toward seeking additional information. Given the evidence for impaired problem-solving performance overall, however, there is a basis for skepticism about the idea that dissonance effects could be explained by a more general phenomenon of negative affect spurring on problem solving.</span><br /><br /><span>Interestingly, a fairly recent study provides evidence that <a href="https://www.sciencedirect.com/science/article/pii/S0959475212000357">confusion can aid learning</a>, but the experimenters also found that self-reports of confusion failed to correlate with confusion induction. The <a href="https://psycnet.apa.org/record/1995-05331-001">negative affect of dissonance</a>, however, is typically reportable (even if participants are unaware of its source). Thus the sort of confused feelings Kevin suggests drive effortful problem-solving differ from dissonance in key ways.</span><br /><br /><span>Some theories posit confusion-like feelings as a spur to further cognition, but don&rsquo;t assign them a negative valence. 
Gopnik uses the provocative (if slightly cringey) <a href="https://link.springer.com/article/10.1023/A:1008290415597">metaphor of orgasm</a> to describe the feeling of successfully explaining some phenomenon. Even if Gopnik&rsquo;s hypothesized &ldquo;theory drive&rdquo; may provoke negative feelings when frustrated by a particularly confusing problem, it doesn&rsquo;t seem to come prepackaged with negative valence as dissonance does. Similarly, Fingerhut and Prinz argue that the <a href="https://www.sciencedirect.com/science/article/pii/S0079612318300049">&ldquo;cognitive perplexity&rdquo;</a> that motivates exploratory behavior is a key component of the (typically positive) feeling of wonder underlying aesthetic experience. These cognitive emotions don&rsquo;t seem to have the affective profile of dissonance.</span><br /><br /><span>Another relevant point concerns the factors that mediate the generation of negative affect in dissonance experiments. As I argued in my original post, these seem to include self-esteem in a way that would be hard to explain if the affect was a general problem-solving-motivator. This brings us to Kevin&rsquo;s next point: perhaps dissonance effects can arise in third-person as well as first-person cases.</span><br /><br /><span>Fortunately, again, there is relevant evidence. One of the interesting turns in 21<span>st</span>-century dissonance research concerns so-called &ldquo;vicarious dissonance&rdquo;. People can indeed experience dissonance when viewing others engage in counter-attitudinal behavior&ndash;&ndash;but only when subjects share a strong group identity with the people they&rsquo;re observing. Students who strongly identify with their college are <a href="https://content.apa.org/record/2003-05568-006">more likely to rationalize</a> and shift their own attitudes when observing somebody from their own college freely engage in counter-attitudinal behavior, but not when observing somebody from another college do the same thing. These effects aren&rsquo;t driven by <a href="https://journals.sagepub.com/doi/abs/10.1177/1368430204046108">empathy</a> or <a href="https://psycnet.apa.org/record/2015-55973-003">perspective-taking</a>, but instead rely on the self-concept of the observer.</span><br /><br /><span>Effects of this sort are also increased when subjects see themselves as similar to the in-group member. In one study, subjects heard similar in-group members preach the values of sunscreen and later mention that they hypocritically fail to use sunscreen regularly; the <a href="https://www.sciencedirect.com/science/article/pii/S0022103115001249">vicarious dissonance caused by this hypocrisy</a> led subjects to be more likely to take some sunscreen a moment later if the observed similar in-group member freely chose to make the hypocritical statement. When subjects are told that the lights and ventilation system may cause discomfort&mdash;similar to the placebo pill in <a href="https://psycnet.apa.org/record/1974-32359-001">Zanna and Cooper&rsquo;s classic study</a>&mdash;they&rsquo;re less likely to grab sunscreen afterwards, strongly suggesting that this effect is mediated by the psychological discomfort characteristic of dissonance. 
These results indicate that vicarious dissonance involves threats to the subject&rsquo;s self-image that are filtered through shared group identity rather than a rational goal of working out what people&rsquo;s attitudes are.</span><br /><br /><span>Finally, Kevin asks why dissonance should arise in cases where evidence contradicts negative aspects of our self-image. I think there are speculative but plausible irrationalist explanations. For example, suppose evidence of a particular failing (e.g., <em>I&rsquo;m bad at sports</em>) originally caused dissonance, but over time you adjust and maintain a set of beliefs that you&rsquo;re comfortable with (e.g., <em>I&rsquo;m bad at sports, but it doesn&rsquo;t matter because sports are dumb</em>). Now if you face evidence that you don&rsquo;t have that failing, all of your subsequent rationalizations are at risk, creating new openings for harmful conclusions about yourself that you haven&rsquo;t yet built up defenses against (e.g., <em>If I&rsquo;m good at sports, then I was wrong to shy away from them; and I was wrong to think they&rsquo;re dumb; and I was wrong to focus on X instead&hellip;</em>). If belief updating has an immunodefensive function, it makes sense that there should be a drive to stick with old problems for which you&rsquo;ve already developed cognitive coping strategies.</span><br /><br /><span>That said, the story I&rsquo;ve been pushing for is compatible with the co-presence of <a href="https://www.sciencedirect.com/science/article/pii/S0065260108004036#bb0034">rationally motivated drives</a> for consistency, and/or more <a href="https://journals.sagepub.com/doi/10.1111/j.1467-9280.2007.02012.x">primitive analogues of dissonance</a>. Thus it&rsquo;s worth ending on a conciliatory note: the picture I&rsquo;ve sketched of a psychological immune system is not only compatible with other rational forms of belief updating, it really only makes sense alongside rational updating. The empirical evidence shows that people are extremely good at inferring consequences of their behavior for their self-image, and it is only because these inferences are so sensitive to the strength and content of incoming evidence that they create a threat to self-esteem. This threat then has to be compensated for by introducing motivational forces that push cognition in a direction that preserves a stable, positive self-concept. Rationalization suggests that the function of cognition is not exclusively rational, but that doesn&rsquo;t undermine the existence or importance of rational belief updating.</span></div>]]></content:encoded></item><item><title><![CDATA[Thoughts on Rationalization]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/thoughts-on-rationalization]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/thoughts-on-rationalization#comments]]></comments><pubDate>Sat, 04 Jul 2020 10:06:49 GMT</pubDate><category><![CDATA[Rationalization]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/thoughts-on-rationalization</guid><description><![CDATA[(1000 words; 5 minute read.)Here are a few thoughts I had after reading Jake Quilty-Dunn’s excellent guest post, in which he makes the case for the irrationality of rationalization (thanks Jake!).&nbsp; I’ll jump right in, so make sure you take a look at his piece before reading this one.I very much like Jake’s distinction between merely positing an input-output function to explain (ir)rational behavior, vs. 
positing a particular causal process underlying it.&nbsp; It seems absolutely righ [...] ]]></description><content:encoded><![CDATA[<div class="paragraph"><font size="3"><font color="#818181" style="">(1000 words; 5 minute read.)</font><br></font><br><font color="#000000">Here are a few thoughts I had after reading Jake Quilty-Dunn&rsquo;s</font> <a href="https://www.kevindorst.com/stranger_apologies/rationalization-and-irrationality-guest-post" style="color: rgb(0, 0, 0);">excellent guest post</a><font color="#000000">, in which he makes the case for the irrationality of rationalization (thanks Jake!).</font><span style="color: rgb(0, 0, 0);">&nbsp;</span> <font color="#000000">I&rsquo;ll jump right in, so make sure you take a look at his piece before reading this one.</font><br><span></span></div><div><!--BLOG_SUMMARY_END--></div><div class="paragraph">I very much like Jake&rsquo;s distinction between merely positing an input-output function to explain (ir)rational behavior, vs. positing a particular causal process underlying it.<span>&nbsp;</span> It seems absolutely right to me that this is a strategy that has the potential to make progress on debates over (ir)rationality.<span>&nbsp;</span> And I definitely agree that the evidence he presents raises some challenges for a rational picture of rationalization!<br><br>But I want to raise a couple of questions to probe how severe those challenges are. As I understand it, there are two main pieces of evidence in favor of the (irrational) dissonance-reduction mechanism: (1) people&rsquo;s tendency to rationalize is mediated by negative affect (the desire to avoid an unpleasant feeling), and (2) the way they rationalize tends to be closely linked to maintaining stable motivation.<br><br>My questions: Should (1) be surprising on a rational retelling of the narrative? And how strong is the evidence for (2)?<br><br>To (1): if rationalization were generally a result of rational revisions of beliefs, would it be surprising that this process is mediated through discomfort?<span>&nbsp;</span> In general, processes that take cognitive effort require some motivational push to get people to do them. Given that, why not think that the negative affect associated with dissonance is the mind&rsquo;s signal to itself, &ldquo;Hey&ndash;&ndash;figure this out!&rdquo;<br><br>I think a similar feeling will be familiar to lots of people&mdash;especially those who&rsquo;ve done much math, coding, or logic. There&rsquo;s a distinctively exciting&ndash;&ndash;but also agonizing&ndash;&ndash;feeling of being stuck on a proof or problem; a strong discomfort with not knowing how to figure out the answer, which can be quite motivating.<br><br>More generally, I suspect curiosity is often driven by some form of negative affect. For an example, consider this riddle:</div><blockquote><span style="color:rgb(42, 42, 42)">On Saturday, Becca starts climbing a mountain at 9am, gets to the top at 6pm, and spends the night there.&nbsp;&nbsp;On Sunday, she starts walking down (on the same path she came up) at 9am, and gets to the bottom at 6pm.&nbsp;Question: is there a time of day&nbsp;<em>t</em> such that, at <em>t</em>, she was at exactly the same location on both Saturday and Sunday? 
(You don't have to say what the time is; just whether there is guaranteed to be such a time.)<br>(Credit:&nbsp;</span><a href="https://www.amazon.co.uk/Thought-Experiments-Roy-Sorensen/dp/019512913X">Roy Sorensen</a><span style="color:rgb(42, 42, 42)">.)</span></blockquote><div class="paragraph">Think about it. I&rsquo;m not going to tell you the answer (yet).&nbsp;&nbsp;But I&rsquo;ll tell you that the answer is neat&ndash;&ndash;most people have an &ldquo;Aha!&rdquo; moment when they figure it out.&nbsp;&nbsp;So keep thinking it over&hellip;<br><br>As you do so, reflect on how it&nbsp;<em>feels</em>&nbsp;to think it over and not know the answer.&nbsp;&nbsp;A bit uncomfortable, maybe even frustrating, isn&rsquo;t it?&nbsp;&nbsp;Curiosity&ndash;&ndash;the scientists&rsquo; famous, &ldquo;Hm, that&rsquo;s funny&hellip;&rdquo;&ndash;&ndash;can be quite the motivator!&nbsp; I appeal to authority:<br><br></div><div><div id="180692008423007673" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml"><img src="https://imgs.xkcd.com/comics/nerd_sniping.png" alt="Trulli" width="625"> </div></div><div class="paragraph"><span style="color:rgb(42, 42, 42)"><br>The point: I wonder how surprising it would be, on a rational picture, that rationalization is mediated by a negative feeling. Might that feeling be your mind&rsquo;s signal to itself to figure something out&ndash;&ndash;be it a riddle about a backpacker, or a tension in your web of beliefs?</span><br><br><span style="color:rgb(42, 42, 42)">This would predict, for instance, that giving people a sugar pill and telling them it&rsquo;ll make them uncomfortable would lead to fewer people solving riddles like the one above. I have no idea if that&rsquo;s right! But it seems like a not-bonkers idea to test.</span><br><br><span style="color:rgb(42, 42, 42)">(Okay okay, the answer to the riddle: imagine that Sati starts walking up the mountain, and Sunny starts walking down it, at 9am&nbsp;</span><em style="color:rgb(42, 42, 42)">on the same day</em><span style="color:rgb(42, 42, 42)">. Will there be a time&nbsp;</span><em style="color:rgb(42, 42, 42)">t</em><span style="color:rgb(42, 42, 42)"> at which they are both at the same location?)</span></div><div class="paragraph"><span style="color:rgb(42, 42, 42)">Turn to (2): the idea that rationalization is geared toward maintaining motivation.&nbsp;&nbsp;I&rsquo;m wondering about how strong the evidence is for this claim; here are two specific questions.</span><br><br><span style="color:rgb(42, 42, 42)">First: can the dissonance studies be replicated third-personally?&nbsp;&nbsp;Meet Jim. He&rsquo;s nice,&nbsp;</span><a href="https://i2.wp.com/www.sporcle.com/blog/wp-content/uploads/2020/04/4-2.jpg?resize=1280%2C720&amp;ssl=1" target="_blank">works for a paper company</a><span style="color:rgb(42, 42, 42)">, has a happy marriage, and volunteers with a senior-services organization on the weekends.</span><br><br><span style="color:rgb(42, 42, 42)">Jim just participated in an experiment. He sat in a room and adjusted a knob back and forth, following a randomly moving dot, for 20 minutes.&nbsp;&nbsp;The experimenter then asked him if he&rsquo;d be willing to tell another subject sitting outside that the task was fun, and offered him 1 dollar to do so. 
Jim did so.&nbsp;&nbsp;Question: what&rsquo;s your estimate for how fun the knob-adjusting activity was, on a scale of 1&ndash;10?</span><br><br><span style="color:rgb(42, 42, 42)">You see the parallel, of course.&nbsp; My guess is that if we ran both this type of question and a variant in which the experimenter offered Jim 100 dollars, people&rsquo;s estimate of how fun the activity was would be higher in the 1-dollar than the 100-dollar condition.&nbsp;&nbsp;I haven&rsquo;t found any studies like this (Jake, do you know of any?), but let&rsquo;s suppose that&rsquo;s right.</span><br><br><span style="color:rgb(42, 42, 42)">In this case, it seems relatively clear that the explanation would go through the fact that you think Jim is a nice person.&nbsp; A nice person saying that an experiment was fun is more evidence that it&rsquo;s fun if he had little incentive to do so, than it is if he had a lot of incentive to do so.</span><br><br><span style="color:rgb(42, 42, 42)">In other words: it seems like the motivation-maintenance explanation would have trouble explaining third-personal cases like this, where your own self-image isn&rsquo;t at issue.&nbsp;&nbsp;Of course,&nbsp;if&nbsp;there&rsquo;s a big asymmetry between these types of cases, that&rsquo;s a point in favor of the dissonance theory! But I&rsquo;m a bit suspicious about such an asymmetry.</span><br><br><span style="color:rgb(42, 42, 42)">Final question: if the purpose of rationalization really is to maintain motivation, why&ndash;&ndash;as Jake says&ndash;&ndash;does dissonance reduction lead people to prefer consistent&nbsp;</span><em style="color:rgb(42, 42, 42)">but negative</em><span style="color:rgb(42, 42, 42)">&nbsp;self-appraisals? Why not instead create a (perhaps inconsistent but) rosy view of yourself, period? In other words, why does consistency play the role it does in cognitive-dissonance effects?</span><br><br><span style="color:rgb(42, 42, 42)">Jake has plenty of things to say about this! So this is in part just an invitation for him to say more. 
But I think it&rsquo;s worth flagging that the role of consistency in cognitive dissonance is straightforwardly predicted by (epistemically) rational accounts of it&ndash;&ndash;consistency is a means of getting to the truth&ndash;&ndash;but that irrationalist accounts need to say something further to explain it.</span><br><br><br><span style="color:rgb(42, 42, 42)">What next?</span><br><strong style="color:rgb(42, 42, 42)">Check out&nbsp;<a href="https://www.kevindorst.com/stranger_apologies/thoughts-on-thoughts-on-rationalization-guest-post" target="_blank">Jake's reply</a></strong><span style="color:rgb(42, 42, 42)">, which (among other things) discusses some interesting experiments bearing on my empirical questions.</span></div>]]></content:encoded></item><item><title><![CDATA[Rationalization and (Ir)rationality (Guest Post)]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/rationalization-and-irrationality-guest-post]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/rationalization-and-irrationality-guest-post#comments]]></comments><pubDate>Fri, 26 Jun 2020 07:00:39 GMT</pubDate><category><![CDATA[Rationalization]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/rationalization-and-irrationality-guest-post</guid><description><![CDATA[This is a guest post from Jake Quilty-Dunn&nbsp;(Oxford / Washington University), who has an&nbsp;interestingly different take on the question of rationality than I do. This post is based on a larger project; check out the&nbsp;full paper here.&#8203;(2500 words; 12 minute read.)  &#8203;Is it really possible that people tend to be rational?On the one hand, Kevin and others who favor &ldquo;rational analysis&rdquo; (including Anderson, Marr, and leading figures in the recent surge of Bayesian co [...] ]]></description><content:encoded><![CDATA[<div class="paragraph"><font color="#818181" size="3">This is a guest post from <a href="https://sites.google.com/site/jakequiltydunn/home" target="_blank" title="">Jake Quilty-Dunn</a>&nbsp;(Oxford / Washington University), who has an&nbsp;interestingly different take on the question of rationality than I do. This post is based on a larger project; check out the&nbsp;<a href="https://philpapers.org/rec/QUIURO" target="_blank" title="">full paper here</a>.<br />&#8203;<br />(2500 words; 12 minute read.)</font></div>  <div class="paragraph"><br /><span>&#8203;Is it really possible that people tend to be rational?</span><br /><br /><span>On the one hand, Kevin and others who favor &ldquo;rational analysis&rdquo; (including <a href="https://psycnet.apa.org/record/1990-98299-000" target="_blank">Anderson</a>, <a href="https://mitpress.mit.edu/books/vision">Marr</a>, and <a href="https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/precis-of-bayesian-rationality-the-probabilistic-approach-to-human-reasoning/164A7469C0C255528A77C61CE3C9C771">leading figures</a> in the <a href="https://science.sciencemag.org/content/349/6245/273.abstract">recent surge</a> of <a href="https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/resourcerational-analysis-understanding-human-cognition-as-the-optimal-use-of-limited-computational-resources/586866D9AD1D1EA7A1EECE217D392F4A">Bayesian cognitive science</a>) have made the theoretical point that <a href="https://www.kevindorst.com/stranger_apologies/why-rationalize-look-and-see">the mind evolved to solve problems</a>.
Looking at how well it solves those problems&mdash;in perception, motor control, language acquisition, sentence parsing, and elsewhere&mdash;it&rsquo;s hard not to be impressed. You might then extrapolate to the acquisition and updating of beliefs and suppose those processes ought to be optimal as well.</span><br /><br /><span>On the other hand, many of us would like simply to point to our experience with other human beings (and, in moments of brutal honesty, with ourselves). That experience seems on its face to reveal a litany of biases and irrational reactions to circumstances, generating not only petty personal problems but even larger social ills.&nbsp;</span></div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div class="paragraph"><span>A prominent research program in social psychology has lent scientific legitimacy to this pessimistic picture of human nature. People irrationally ignore base rates, or form overconfident judgments of their own performance, or harbor implicit biases, or fall prey to various provable &ldquo;fallacies.&rdquo; Researchers have been eager to enumerate biases and heuristics and construct a taxonomy of irrational thought patterns. One might optimistically think that, if we can only learn to correct for these design flaws, our species could have some hope of transcending our fallen cognitive nature.<br /><br />&#8203;But, to return to the rational analysis approach, there&rsquo;s something strange about this picture of human cognition as irrational: <em>if we&rsquo;re so good at perceiving, parsing, acting, and other complicated cognitive tasks, why should we be so bad at reasoning?</em> Furthermore, is the empirical evidence for irrationality really that strong?&nbsp;<br /><br />Kevin and others (including other contributors to this blog) have provided compelling critiques of specific claims made by cognitive scientists and philosophers in the broadly &ldquo;irrationalist&rdquo; tradition. These critiques often target a particular style of argument for irrationalism. Researchers may have the impression that a certain pattern of judgment, such as metacognitive overconfidence, is obviously irrational. Then, when we find results suggesting that people exhibit that pattern, we add it to the pile of evidence supporting a dismal view of human rationality. But this style of evidence-gathering is vulnerable to rational reconstruction&mdash;if we think again about what rationality requires, perhaps <a href="https://www.kevindorst.com/stranger_apologies/overconfidence">overconfidence</a>, <a href="https://www.kevindorst.com/stranger_apologies/a-glass-half-full">framing effects</a>, <a href="https://www.kevindorst.com/stranger_apologies/hindsight-bias-can-be-rational-brian-hedden">hindsight bias</a>, <a href="https://www.kevindorst.com/stranger_apologies/the-gamblers-fallacy-is-not-a-fallacy">the gambler&rsquo;s fallacy</a>, <a href="http://dx.doi.org/10.3998/ergo.12405314.0006.040">the sunk cost fallacy</a>, etc., are not so irrational after all.&nbsp;<br /><br />Disagreements of this sort typically concern <em>how to model a given input-output function</em>: can we imagine a rational/irrational creature that would exhibit just this sort of cognitive response (output) to just this sort of evidence (input)? This is a productive debate, and rationalists are providing useful challenges to the default status claimed by irrationalist models. 
But I don&rsquo;t think the evidence at issue here provides the strongest case for irrationalism.<br /><br />Instead, I want to focus on <em>cognitive processes underlying belief updating</em>. Rather than observing an input-output function and debating over whether to model it rationally or irrationally, it would be nice if we had some evidence about what the actual causal processes are that underwrite a pattern of belief change. We could then judge whether that process looks to have the rationally optimal character of vision and motor control (though there may be <a href="https://doi.org/10.31234/osf.io/e5gy3">counterexamples even in vision</a>), or whether it has another, rationally degenerate purpose. So one task for the irrationalist is to specify a non-rational but adaptive purpose for belief updating, and then find evidence of cognitive mechanisms that seem geared toward that purpose rather than a rationalistic one.<br /><br />Fortunately (or unfortunately, depending on your allegiance), I think there is evidence of just this sort concerning <strong><em>rationalization</em></strong>. I&rsquo;ll focus on the generation and reduction of <strong>cognitive dissonance</strong>. This focus is partly because of space limitations and partly because dissonance is arguably the strongest and best-studied example of a cognitive process underlying rationalization that has an expressly irrational character and purpose.&nbsp;</span></div>  <div class="paragraph"><br />&#8203;&#8203;The basics: When people encounter evidence that contradicts one of their strongly held beliefs, they experience an unpleasant feeling of cognitive dissonance, a sort of psychological pain. Cognitive dissonance has not only a <em>negative affect</em>&mdash;the unpleasant feeling&mdash;but also a <em>motivational force</em>, akin to the motivational force to eat that accompanies hunger. But cognitive dissonance doesn&rsquo;t motivate us to eat (except when it does&ndash;&ndash;<a href="https://www.sciencedirect.com/science/article/abs/pii/S0195666310003648">meat-eating</a> is often <a href="https://www.sciencedirect.com/science/article/abs/pii/S0195666317305329">sustained</a> through dissonance reduction); it motivates us to <em>reconcile the contradiction in a way that palliates the negative feeling of dissonance</em>.<br /><br /><span>Here&rsquo;s an example. Suppose a kindly experimenter asks you to twiddle a knob for twenty minutes. After completing this boring task, the experimenter then tells you there&rsquo;s another participant waiting outside, and asks whether you would please tell them that, contrary to the experience you just had, the task was actually <em>fun</em>. If you&rsquo;re lucky, she offers you 100 dollars to do this, and if you&rsquo;re unlucky, she offers you 1 dollar. After you tell the participant that the task was fun, you&rsquo;re then asked for your true beliefs about how fun the task was.<br /><br />Here&rsquo;s the result: if you were paid 100 dollars, you think the task was boring and that you lied when you said it was fun. But if you were paid 1 dollar, you think the task was fun and that you were accurately relaying information to the other participant. (This description is adapted from <a href="https://psycnet.apa.org/record/1960-01158-001">a classic experiment</a>.)&nbsp;<br /><br />Why should getting paid <em>less</em> money make you think the task was <em>more</em> fun? Since the task was really boring, acknowledging that fact means acknowledging that you lied when you said it was fun. 
If you were paid 100 dollars, it&rsquo;s easy to admit that you&rsquo;d tell a harmless lie for that much money. But if you were only paid 1 dollar, it&rsquo;s harder to explain your own behavior without facing some harsh truths. Is it that you&rsquo;re petty enough that you&rsquo;ll lie for 1 dollar? Or suggestible enough that you&rsquo;ll lie for 1 dollar as long as an authority figure requests it? Or oblivious enough that you were incorrect about your own judgment about the task?<br /><br />There is an irrationalist explanation of effects like this. When faced with a contradiction between your beliefs about yourself&ndash;&ndash;<em>I&rsquo;m good, I&rsquo;m rational, I&rsquo;m competent</em>&mdash;and your knowledge of your own behavior&ndash;&ndash;<em>I just told a stranger that this boring task was fun because a psychologist offered me 1 dollar</em>&mdash;you experience cognitive dissonance. (This is the generation of dissonance.) This negative feeling motivates you to push your attitudes around in a way that alleviates psychological discomfort. Often enough, the easiest way to do so is simply to change your belief so that your behavior accords with it. In this case, you change your judgment about the task: the task was really fun after all, so when you told that stranger that the task was fun, you were just honestly reporting your beliefs! No obvious negative implications about your personality follow, so you can rest easy. (This is the reduction of dissonance.)<br /><br />This explanation might strike you as possible, but thoroughly non-obvious and maybe even overly complex. To be sure, alternative rational reconstructions abound. For instance, <a href="https://psycnet.apa.org/record/1967-13584-001">Daryl Bem</a> (before his <a href="https://psycnet.apa.org/buy/2011-01894-001">forays into ESP</a>) argued that what&rsquo;s really going on in experiments like this is that people don&rsquo;t have direct access to their attitudes and must infer them from their behavior. So, since I told this person the task was fun and 1 dollar is not enough to make me lie, I must have done it because I really believe the task is fun.&nbsp;<br /><br />In a forthcoming paper called <a href="https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/rationalization-is-rational/2A13B99ED09BD802C0924D3681FEC55B">&ldquo;Rationalization is Rational&rdquo;</a>, Fiery Cushman argues that we rationalize in order to work out what it would be most rational for us to believe given our behavior, thereby facilitating a form of learning he calls &ldquo;representational exchange,&rdquo; wherein we acquire conscious propositional attitudes that repackage the murkier unconscious sources of our original action in a form we can consciously act on. Even if I didn&rsquo;t say the task was fun because I actually believed it, I rationally ought to adopt that belief anyway because it makes the most sense out of my previous action and is therefore likely to afford beneficial actions in the future.<br /><br />Now it looks like we&rsquo;ve ended up in the same position as ever: we have a pattern of belief change in response to evidence, and we can model it as rational or irrational. Fortunately, however, the irrationalist model we get from dissonance theory affords more concrete claims about the causal processes underlying this belief change. 
Specifically, according to the version of dissonance theory I articulated above, rationalization is <em>self-serving</em>.&nbsp;What I mean by this is twofold:</span><ol><li><span>The immediate goal in rationalizing is to make a bad feeling go away&mdash;that is, we rationalize to make ourselves feel better, not to get at the truth or be more rational.</span></li><li><span>The broader purpose of rationalization is to avoid damage to core aspects of our self-image and thereby ward off persistent bad feelings (e.g., depression) and maintain stable motivation.</span></li></ol> These two claims are clearly opposed to rationalistic models, since they assert that both the immediate aim and the broader (possibly evolutionary) function of rationalization are not to acquire true beliefs, knowledge, or other such rationalistic goals. Instead, we push our beliefs around to avoid bad feelings with the broader purpose of maintaining self-esteem and motivation.&nbsp;<br /><br />Claims 1 and 2 also give us some empirical predictions: first, rationalization should be mediated by negative affect, and second, rationalization should be responsive to self-esteem. The evidence supports these claims.&nbsp;<br /><br />Evidence for Claim #1: When people are in dissonance experiments like the one mentioned above, <a href="https://psycnet.apa.org/fulltext/1995-05331-001.html">they report feeling bad</a>. Even more importantly, they show implicit marks of negative affect, such as measurable changes in <a href="https://psycnet.apa.org/fulltext/1986-27173-001.html">electrical activity on the skin</a> that signal stress as well as <a href="https://www.sciencedirect.com/science/article/abs/pii/S1053811912011469">neural activation</a> in regions linked to negative emotion. If these bad feelings are driving the belief change, then mitigating the feelings or making subjects think the feelings derive from an irrelevant source should mitigate the belief change as well. And that&rsquo;s just what happens: <a href="https://psycnet.apa.org/record/1982-09429-001">drinking alcohol</a> after dissonance is induced minimizes rationalization, and <a href="https://psycnet.apa.org/record/1974-32359-001">giving people a sugar pill</a> and telling them it will make them uncomfortable makes them misattribute the negative affect of dissonance to the pill, and therefore makes them less likely to shift their attitudes.<br /><br />Evidence for Claim #2: The evidence here is more indirect. But a simple prediction is that lowering people&rsquo;s self-esteem (e.g., by giving them a bogus personality assessment with harsh results) should minimize dissonance-based rationalization. This prediction <a href="https://psycnet.apa.org/record/1965-09962-001">turns out to be true</a>. And if, after dissonance is generated, you&rsquo;re reminded of some of your good qualities (which are strictly irrelevant to the task at hand), <a href="https://www.sciencedirect.com/science/article/abs/pii/S0022103103000180">rationalization is again minimized</a>.<br /><br />All this evidence suggests that the way we shift our beliefs in response to evidence of our own failings has an irrational character.
We first feel bad in response to the attack on our self-esteem, and we then respond by unconsciously shifting our attitudes around in a way that alleviates the negative feeling and preserves our image of ourselves as good, rational, competent people.<br /><br />My goal here is not to argue that the rational-analysis perspective has no way of modeling these phenomena (though <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/mila.12205" target="_blank">Eric Mandelbaum</a> makes that argument). Instead of testing the limits of rationalistic modeling, we can look &ldquo;under the hood&rdquo; at the actual processes driving belief change. Once we do, we find these effects are driven by negative affect and self-esteem. Irrationalists (of a certain sort) have a readymade explanation: we often update beliefs through rationalization, which is geared toward eliminating negative affect and has the purpose of protecting our image of ourselves and thereby maintaining stable motivation and avoiding depression, anxiety, and other maladaptive states of mind. A Bayesian might be able to rationally reconstruct the belief-updating patterns observed in dissonance experiments. But the challenge for them presented by dissonance-based rationalization is not met merely by developing a rationalistic model. Instead, they need also to provide an equally good explanation for why these changes in belief are (i) predictably motivated by negative affect and (ii) predictably responsive to self-esteem.<br /><br />Now I want to circle back to the question of the &ldquo;purpose&rdquo; of belief updating. <a href="https://www.kevindorst.com/stranger_apologies/why-rationalize-look-and-see" target="_blank">Kevin&rsquo;s comparison to visual perception</a> raises the challenge: <em>why should we see rational degeneracy in reasoning when we (apparently) see rational optimality in vision and elsewhere?</em> I think there is a good answer in the case of dissonance-based rationalization.&nbsp;<br /><br />Having persistent negative beliefs about yourself&ndash;&ndash;<em>I&rsquo;m a bad person, I&rsquo;m stupid, I&rsquo;m incompetent</em>&mdash;is plainly not conducive to mental health. Negative thoughts about the self, and the depression and lack of motivation they engender, are not adaptive. But it doesn&rsquo;t take a scientific research program to show that human behavior routinely falls short of our ideals; our species is immoral, irrational, and incompetent as a matter of course. A perfectly optimal Bayesian updater saddled with human flaws is therefore at risk of adopting negative self-appraisals in ways that hamper healthy emotional functioning. Thus the rationalistic model of cognition itself, if true, creates a motivational problem that needs to be solved through non-rationalistic means.<br /><br />The brand of irrationalism that I favor (owing to <a href="https://psycnet.apa.org/doiLanding?doi=10.1037%252F0022-3514.75.3.617">Gilbert</a> and Mandelbaum) posits a <strong>psychological immune system</strong> to address this design flaw. Dissonance forms one mechanism in this immune system, and its purpose is to push our beliefs in a direction that allows us to maintain a positive self-image despite evidence to the contrary, and thereby keep us emotionally stable and functional. 
To fulfil this function, our minds generate cognitive dissonance when our otherwise rational belief-updating processes deliver conclusions that threaten our sense of self-worth, which motivates us to rearrange our beliefs in ways that defang the threatening evidence.<br /><br />I doubt that this picture of cognition will settle the dispute between rationalistic and irrationalistic models of belief change. But it does put pressure on purely rationalistic models, and it moves the debate past competing models of input-output functions toward competing theories of the structure and function of causal cognitive processes.<br /><br />It matters whether it&rsquo;s true that rationalization has the structure I&rsquo;ve described, and not just for theoretical purposes. Cognitive dissonance is sensitive to group membership&mdash;when our group is criticized, we experience dissonance and feel compelled to rationalize it away. This makes dissonance a plausible mechanism contributing to what Charles Mills calls <a href="https://philpapers.org/rec/MILWI-3">&ldquo;white ignorance&rdquo;</a>, including (e.g.) the persistent ignorance of white Americans of the <a href="https://www.nytimes.com/interactive/2020/06/24/magazine/reparations-slavery.html" target="_blank">continuing history of racism in the United States</a>. Dissonance pushes people to preserve tribal allegiance as well as seek self-exonerating narratives when their morality is in question (as seen in the meat-eating literature). When these two factors are combined, rampant rationalization is likely.<br /><br />Changing these tendencies requires understanding the psychological processes responsible for them. Unfortunately, doing so may force us to give up on a rationalistic picture of human nature.<br /><br /><br />What next?<br /><strong>If you want to hear more about the (potential) adaptive role of irrationality,</strong> check out&nbsp;Lisa Bortolotti's&nbsp;<em><a href="https://books.google.co.uk/books?hl=en&amp;lr=&amp;id=b-vqDwAAQBAJ" target="_blank">The Epistemic Innocence of Irrational Beliefs</a>&nbsp;</em>or&ndash;&ndash;from a different angle&ndash;&ndash;this paper on the <a href="https://www.nature.com/articles/nature10384" target="_blank">evolution of overconfidence</a>.<br /><strong>If you want to hear more about rationalization,</strong>&nbsp;check out <a href="https://philpapers.org/rec/QUIURO" target="_blank">Jake's full paper</a>, the recent discussion piece by Fiery Cushman, <a href="https://www.cambridge.org/core/services/aop-cambridge-core/content/view/2A13B99ED09BD802C0924D3681FEC55B/S0140525X19001730a.pdf/rationalization_is_rational.pdf" target="_blank">"Rationalization is Rational", and the replies to it</a>&nbsp;(including <a href="https://philpapers.org/rec/QUIRII" target="_blank">Jake's</a>!), and recent articles by <a href="https://philpapers.org/rec/DCRREA" target="_blank">Jason D'Cruz</a> and <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/mila.12205" target="_blank">Eric Mandelbaum</a>.</div>]]></content:encoded></item><item><title><![CDATA[The Gambler's Fallacy is Not a Fallacy]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/the-gamblers-fallacy-is-not-a-fallacy]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/the-gamblers-fallacy-is-not-a-fallacy#comments]]></comments><pubDate>Fri, 08 May 2020 08:15:33 GMT</pubDate><category><![CDATA[All Most Read]]></category><category><![CDATA[Gambler's Fallacy]]></category><guid
isPermaLink="false">https://www.kevindorst.com/stranger_apologies/the-gamblers-fallacy-is-not-a-fallacy</guid><description><![CDATA[(2900 words; 15 minute read.)[5/11 Update: Since the initial post, I've gotten a ton&nbsp;of extremely helpful feedback (thanks everyone!). In light of some of those discussions I've gone back and added a little bit of material. You can find it by skimming for the purple text.]​[5/28 Update:&nbsp;If I rewrote this now, I'd now reframe the thesis as: "Either the gambler's fallacy is rational, or it's much less common than it's often taken to be––and in particular, &nbsp;standard examples us [...] ]]></description><content:encoded><![CDATA[<div class="paragraph"><span style="color:rgb(129, 129, 129)"><font size="2">(2900 words; 15 minute read.)</font></span></div><div class="paragraph"><font size="3"><font color="#818181">[</font><strong style="color:rgb(129, 129, 129)">5/11 Update:</strong> <font color="#818181">Since the initial post, I've gotten a ton</font><font color="#818181">&nbsp;of extremely helpful feedback (thanks everyone!). In light of some of those discussions I've gone back and added a little bit of material. You can find it by skimming for the</font> <font color="#8640AE">purple text</font><font color="#818181">.]<br>&#8203;<br>[<strong>5/28 Update:</strong>&nbsp;If I rewrote this now, I'd now reframe the thesis as: "Either the gambler's fallacy is rational, or it's much less common than it's often taken to be&ndash;&ndash;and in particular, &nbsp;standard examples used to illustrate it don't do so."]</font></font></div><div class="paragraph"><br><font size="3">&#8203;A title like that calls for some hedges&ndash;&ndash;here are two. &nbsp;First, this is work in progress: the conclusions are tentative (and feedback is welcome!). Second, all I'll show is that rational people would often exhibit this "fallacy"&ndash;&ndash;it's a further question whether real people who actually commit it are being rational.<br><br>Off to it.<br><br>On my computer, I have a bit of code call a "koin". Like a coin, whenever a koin is "flipped" it comes up either heads or tails. I'm not going to tell you anything about how it works, but the one thing everyone should know about koins is the same thing that everyone knows about coins: they tend to land heads around half the time.<br><br>I just tossed the koin a few times. Here's the sequence it's landed in so far:<br>&#8203;</font></div><div class="paragraph" style="text-align:center;"><strong><font size="5">T H T T T T T</font></strong></div><div class="paragraph"><br><font size="3">How likely do you think it is to land heads on the next toss? &nbsp;You might look at that sequence and be tempted to think a heads is "due", i.e. that it's more than 50% likely to land heads on the next toss. After all, koins usually land heads around half the time&ndash;&ndash;so there seems to be an overly long streak of tails occurring.<br><br>But wait! If you think that, you're committing the <strong>gambler's fallacy</strong>: the tendency to think that if an event has recently happened more frequently than normal, it's less likely to happen in the future. That's irrational. &nbsp;Right?<br><br>Wrong. 
&nbsp;Given your evidence about koins, you&nbsp;<em>should</em>&#8203; be more than 50% confident that the next toss will land heads; thinking otherwise would be a mistake.</font></div><div><!--BLOG_SUMMARY_END--></div><div class="paragraph">I'll spend most of this post defending this claim for koins, and then talk about how it generalizes to real-life random processes&ndash;&ndash;like&nbsp;<em>c</em>oins&ndash;&ndash;at the end.<br><br>But first: why care? People don't appeal to the gambler's fallacy to explain polarization or to demonize their political opponents&ndash;&ndash;so if you're here for those topics, this discussion may seem far afield.<br><br>But I think it's relevant. The irrationality and pervasiveness of the gambler's fallacy is one of the most widespread pieces of irrationalist folklore.&nbsp;<span>It&rsquo;s been</span> <a href="https://en.wikipedia.org/wiki/Gambler%27s_fallacy#Origins">taken to support</a> <span>a variety of unflattering views of the human mind, including a belief in the "</span><a href="https://en.wikipedia.org/wiki/Faulty_generalization#Hasty_generalization">law of small numbers</a>"<span>, a tendency to use</span> <a href="https://en.wikipedia.org/wiki/Representativeness_heuristic">representativeness as a (poor) substitute for probability</a><span>, an</span> <a href="https://en.wikipedia.org/wiki/Locus_of_control">illusion of control</a><span>, and even an (unfounded)</span> <a href="https://en.wikipedia.org/wiki/Just-world_hypothesis">belief in a just world</a><span>.&nbsp; Insofar as a general&nbsp;belief that people are irrational leads us to demonize those who disagree with us&ndash;&ndash;</span><a href="https://www.kevindorst.com/stranger_apologies/plea_for_political_empathy">as I think it does</a><span><span>&mdash;scrutinizing such irrationalist claims is important.</span><br><br><span><font color="#8640AE">So back to gamblers. <strong>What&nbsp;<em>is</em>&nbsp;the gambler's fallacy?</strong> Many have suggested to me that it's the tendency to think that a heads is more likely after a string of tails, despite knowing that the tosses&nbsp;are statistically independent. But this can't be right&ndash;&ndash;for no one commits&nbsp;<em>that</em>&nbsp;fallacy. After all, knowing that the tosses are independent is just&nbsp;knowing that a heads is not more (or less) likely after a string of tails; therefore anyone who thinks that a heads <em>is</em> more likely after a string of tails&nbsp;does <em>not</em>&nbsp;know that the tosses are independent.<br><br>Here's a more plausible account of the (supposed)&nbsp;fallacy. You commit the gambler's&nbsp;fallacy if, purely on the basis of your knowledge that the koin lands heads 50% of the time, you think it's more likely to land heads after a (long string of) tails. That's what I'll argue is&nbsp;rational.</font><br><br>All you know about koins is that they tend to land heads about half the time. You can infer from this that&nbsp;</span><em>on average&ndash;&ndash;</em><span>across all flips&ndash;&ndash;</span>the<span>&nbsp;koin's chance of landing heads on a given toss is around 50%. What are the ways that this could be true?</span></span><br><br>&#8203;One (obvious) possibility is that the chance of heads is&nbsp;<em>always</em>&nbsp;50%. 
Call this hypothesis:</div><div><div class="wsite-multicol"><div class="wsite-multicol-table-wrap" style="margin:0 -15px;"><table class="wsite-multicol-table"><tbody class="wsite-multicol-tbody"><tr class="wsite-multicol-tr"><td class="wsite-multicol-col" style="width:3.2167832167832%; padding:0 15px;"><div class="wsite-spacer" style="height:50px;"></div></td><td class="wsite-multicol-col" style="width:96.783216783217%; padding:0 15px;"><div class="paragraph"><strong>Steady:</strong>&nbsp;<span>On each toss, the koin has a 50% chance of landing heads.</span></div></td></tr></tbody></table></div></div></div><div class="paragraph">Given your knowledge about koins, you should leave open that Steady is true.<br><br>Should you be <em>sure</em> it&rsquo;s true? If so, then the gambler's fallacy would indeed be a fallacy. But you shouldn't be sure of it, for here are two other hypotheses that would also vindicate your evidence that koins tend to land heads around half the time:<br></div><div><div class="wsite-multicol"><div class="wsite-multicol-table-wrap" style="margin:0 -15px;"><table class="wsite-multicol-table"><tbody class="wsite-multicol-tbody"><tr class="wsite-multicol-tr"><td class="wsite-multicol-col" style="width:2.7932960893855%; padding:0 15px;"><div class="wsite-spacer" style="height:50px;"></div></td><td class="wsite-multicol-col" style="width:97.206703910615%; padding:0 15px;"><div class="paragraph"><strong>Switchy:</strong> When the koin lands heads (tails), it's&nbsp;<em>less</em>&nbsp;than 50% likely to land heads (tails) on the next toss.<br><br><strong>Sticky:</strong> When the koin lands heads (tails), it's&nbsp;<em>more</em>&nbsp;than 50% likely to land heads (tails) on the next toss.</div></td></tr></tbody></table></div></div></div><div class="paragraph">&#8203;The Switchy hypothesis says that the koin has a tendency to switch how it lands. For example, perhaps after landing heads (tails), it's 40% likely to land heads (tails) on the next toss, and 60% likely to switch to tails (heads). &nbsp;Similarly, the Sticky hypothesis says the koin has a tendency to stick to how it lands. For example, perhaps after landing heads (tails) it's 60% likely to stick with heads (tails) on the next toss, and 40% likely to land tails (heads).<br><br>We can represent hypotheses like Steady, Switchy, and Sticky with what are known as <a href="https://en.wikipedia.org/wiki/Markov_chain" target="_blank">Markov chains</a>: a series of states the koin might be in, along with its chance of transitioning from a given state at one time to other states at the next time. &nbsp;For instance, our example of a Switchy hypothesis can be represented like this:</div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/1m-switchy-diagram.png?1588928198" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">Fig. 1: A Switchy hypothesis</div></div></div><div class="paragraph">This diagram indicates that whenever the koin is in state H (has just landed heads), it's 40% likely to land heads on the next flip and 60% likely to land tails on the next flip. Vice versa for when it's in state T (has just landed tails). 
&nbsp;We can similarly represent our Sticky and Steady hypotheses this way:</div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/1m-sticky-diagram.png?1588928336" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">Fig. 2: A Sticky hypothesis</div></div></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/1m-steady-diagram.png?1588928378" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">Fig. 3: The Steady hypothesis</div></div></div><div class="paragraph">Given their symmetry, all of these hypotheses will make it so that the koin usually lands heads around half the time. (For aficionados: their <a href="https://en.wikipedia.org/wiki/Stationary_distribution" target="_blank">stationary distributions</a> are all 50-50.) Since that's all the evidence you have about koins, you should be uncertain which is true.<br><br>It follows from this uncertainty that, given your evidence, you&nbsp;<em>should</em>&nbsp;commit the gambler's fallacy: when it has just landed tails you should be more than 50% confident that the next toss will land heads; and vice versa when it has just landed heads.<br><br>Why? I'll focus on explaining a simple case; the Appendix below gives a variety of generalizations.&nbsp;<br><br>Let's suppose you can be sure that one of the three particular Sticky/Switchy/Steady hypotheses in Figures 1&ndash;3 is true, but you can't be sure which. &nbsp;Suppose you know that the koin has just landed tails (as it has). &nbsp;Given this, you should be more than 50% confident that it'll land heads&ndash;&ndash;you should commit the gambler's fallacy! &nbsp;There are two steps to the reasoning.<br><br>First, you know that if Switchy is true, it has a 60% chance to land heads; that if Steady is true, it has a 50% chance to land heads; and that if Sticky is true, it has a 40% chance to land heads. &nbsp;So if you were very confident in Switchy, you'd be around 60% confident in heads; if you were very confident in Steady, you'd be around 50% confident in heads; and if you were very confident in Sticky, you'd be around 40% confident in heads. &nbsp;More generally, it follows<span>&nbsp;(from</span> <a href="https://en.wikipedia.org/wiki/Law_of_total_probability">total probability</a> <span>and the</span> <a href="https://link.springer.com/chapter/10.1007/978-94-009-9117-0_14">Principal Principle</a><span>) that your confidence in heads&nbsp;should be a weighted average of these three numbers, with weights determined by how confident you should be in each of Switchy, Steady, and Sticky.<br><br>That is, where P(q) &nbsp;represents how confident you should be in q, your confidence that the next flip will be heads given that it has just landed tails should be:</span></div><div><div id="737005137902067097" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml">   \[P(H) ~=~ P(Switchy)\cdot 0.6 ~+~ P(Steady)\cdot 0.5 ~+~ P(Sticky)\cdot 0.4\]</div></div><div class="paragraph"><span style="color:rgb(42, 42, 42)">Notice: whenever P(Switchy) &gt; P(Sticky), this will average out to something greater than 50%. That is, whenever you should be more confident that the koin is Switchy than that it's Sticky, you should think a heads is more than 50% likely to follow a tails, and (by parallel reasoning) that a tails is more than 50% likely to follow a heads.</span></div>
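<div class="paragraph"><span style="color:rgb(42, 42, 42)">To make the arithmetic concrete, here's a quick check in Python (a toy sketch, not the actual koin code; the function name is just for illustration):</span></div><pre><code># The Fig. 1-3 versions: the chance of heads right after a tails is
# 0.6 under Switchy, 0.5 under Steady, and 0.4 under Sticky.
def p_heads_after_tails(p_switchy, p_steady, p_sticky):
    return p_switchy * 0.6 + p_steady * 0.5 + p_sticky * 0.4

print(p_heads_after_tails(1/3, 1/3, 1/3))     # 0.5: no evidence either way
print(p_heads_after_tails(0.40, 0.35, 0.25))  # 0.515: Switchy favored, so over 50%
</code></pre>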
<div class="paragraph"><strong style="color:rgb(42, 42, 42)">Upshot:</strong><span style="color:rgb(42, 42, 42)">&nbsp;whenever you should be more confident that the koin is Switchy than that it's Sticky, you should commit the gambler's fallacy!</span></div><div class="paragraph" style="text-align:right;"><font color="#818181">(1500 words left.)</font></div><div class="paragraph" style="text-align:justify;">And you&nbsp;<em>should</em>&nbsp;be more confident in Switchy than Sticky&ndash;&ndash;this is step two of the reasoning.<br>&#8203;<br>Why? Since you start out with no evidence either way, you should initially be equally confident in Switchy and Sticky. And although both of these hypotheses fit with the observation that the koin tends to land heads about half the time, the Switchy hypothesis makes it&nbsp;<em>more</em>&nbsp;likely that this is so&ndash;&ndash;and therefore is&nbsp;more&nbsp;confirmed than the Sticky hypothesis when you learn that the koin tends to land heads around half the time. &nbsp;This is because Switchy makes it less likely that there will be long runs of heads (or tails) than Sticky does, and therefore makes it more likely the overall proportion of heads will stay close to 50%.<br><br>We can see this in action by working through a small example by hand, and through bigger examples on a computer.<br><br>Small example first. &nbsp;Suppose all you know about the koin is that I've tossed it twice and it landed heads once. Why does Switchy make this outcome more likely than Sticky?<br><br>To land heads on one of two tosses is simply to either land HT or TH, i.e. to land one way initially and then switch. Switchy implies that such a switch is 60% likely, whereas Sticky implies that it is only 40% likely. (Meanwhile, Steady implies that it is 50% likely.) &nbsp;Therefore Switchy makes the "one head in two tosses" outcome more likely than Sticky does.<br><br>It follows, for example, that if you were initially equally confident in each of Switchy, Steady, and Sticky, then after learning that it landed heads once out of two tosses, you should become 40% confident in Switchy, 33% confident in Steady, and 27% confident in Sticky. &nbsp;Plugging these numbers into our above average shows that you should then be a bit over 51% confident that it'll switch again on the next toss&ndash;&ndash;i.e. should commit the gambler's fallacy.<br><br>The reasoning in this small example generalizes. &nbsp;The closer the koin comes to landing heads 50% of the time, the more ways there are to do this that involve switching between heads and tails many times; meanwhile, the closer the koin comes to landing heads 0% or 100% of the time, the fewer switches there could have been. Switchy makes the former sorts of outcomes more likely; Sticky makes the latter sorts of outcomes more likely. So when you learn that the koin tends to land heads roughly 50% of the time, this is more evidence for Switchy than Sticky&ndash;&ndash;and as a result, you should commit the gambler's fallacy.<br><br>So far as I know, there's no tractable formula for determining these likelihoods by hand. But since the systems are Markovian, we can use "dynamic programming" to recursively calculate the likelihoods on a computer.</div>
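<div class="paragraph">Here's a minimal sketch of how that calculation can go&ndash;&ndash;a toy Python version for the three hypotheses of Figures 1&ndash;3 (the helper name is just for illustration; this isn't the actual koin code):</div><pre><code>from collections import defaultdict

def heads_count_distribution(p_switch, n):
    """Return counts where counts[k] = P(exactly k heads in n tosses),
    for a 1-memory chain that switches sides with probability p_switch.
    Switchy (Fig. 1) is p_switch = 0.6; Steady is 0.5; Sticky is 0.4."""
    dist = defaultdict(float)   # (last outcome, heads so far) -> probability
    dist[("H", 1)] = 0.5        # first toss: 50/50 by the hypotheses' symmetry
    dist[("T", 0)] = 0.5
    for _ in range(n - 1):
        nxt = defaultdict(float)
        for (last, k), p in dist.items():
            for side in ("H", "T"):
                step = p_switch if side != last else 1 - p_switch
                nxt[(side, k + (side == "H"))] += p * step
        dist = nxt
    counts = [0.0] * (n + 1)
    for (_, k), p in dist.items():
        counts[k] += p
    return counts

switchy = heads_count_distribution(0.6, 100)
steady  = heads_count_distribution(0.5, 100)
sticky  = heads_count_distribution(0.4, 100)

# Bayes' rule with equal priors: posteriors are proportional to likelihoods.
likes = [switchy[50], steady[50], sticky[50]]
print([l / sum(likes) for l in likes])   # roughly [0.40, 0.33, 0.27]
</code></pre><div class="paragraph">The recursion just tracks, toss by toss, the joint probability of the koin's current side and the number of heads so far&ndash;&ndash;which is what the Markov structure makes tractable. The posterior numbers in the last line should match (up to rounding) the ones computed in the Appendix below.</div>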
<div class="paragraph" style="text-align:justify;">For example, if we toss the koin 100 times we can plot how likely each of the three hypotheses would make various proportions of heads:</div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/1m-simple-100toss_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">Fig. 4: Likelihoods of number of heads in 100 tosses</div></div></div><div class="paragraph">Note that although all three hypotheses generate bell-shaped curves centered around 50% heads, the Switchy hypothesis generates a&nbsp;<em>tighter</em> bell curve around 50% heads.<br><br><font color="#8640AE"><strong>This is the crucial point.</strong> Take any precise statement of what you know about koins&ndash;&ndash;namely, that they "land heads around half the time". A precise version of that claim will take the form "the koin lands heads between (50 &ndash; <em>c</em>)% and (50 + <em>c</em>)% of the time" for some&nbsp;<em>c.</em>&nbsp;(For example, "the koin lands heads between 48% and 52% of the time.") Switchy generates a higher bell curve than Sticky, meaning it makes any such claim more likely than Sticky does&ndash;&ndash;and therefore is&nbsp;<em>more</em>&nbsp;confirmed by what you know than Sticky is.<br><br>For example, here's how likely each of our three hypotheses would make it that the koin lands heads "roughly 50" times out of 100 tosses, under various sharpenings of the claim:</font></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/20-5-11-roughly-50-likelihoods_orig.png" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">Fig. 5: How likely Switchy, Steady, and Sticky each make various "roughly 50 of 100 heads" claims.</div></div></div><div class="paragraph"><font color="#8640AE">Since Switchy makes each of these more likely than Sticky, learning that the koin lands heads "roughly 50" of 100 times provides more evidence for the former.</font><br><br>&#8203;In particular, if you started out&nbsp;&#8531; confident in each of Switchy, Steady, and Sticky, here's how confident you should be in them after updating on various versions of these "roughly 50" claims, along with the resulting confidence you should have that it'll switch on the next flip<span style="color:rgb(42, 42, 42)">:</span></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/roughly-50-table_orig.png" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">Fig.
6: Rational confidence in the various hypotheses once you learn "roughly 50 of 100 heads", along with the resulting degree to which you should commit the gambler's fallacy.</div></div></div><div class="paragraph">In each case, since you should be more confident in Switchy than Sticky, you should perform the gambler's fallacy.<br><br><font color="#8640AE">That said, as has been helpfully pointed out in the comments, as you observe a long string of tails, this provides evidence for Sticky&ndash;&ndash;so sooner or later a long enough streak will make it so that you are no longer more confident in Switchy than Sticky. &nbsp;How quickly this will happen will depend on exactly what version of "the koin tends to land heads roughly half the time" you know beforehand. &nbsp;If all you know is "it landed heads on 50 of 100 tosses", a short streak of tails will dislodge your confidence in Switchy, and in fact make it rational to perform the "<strong>hot hands</strong>" fallacy and expect a&nbsp;<em>tails</em>&nbsp;to be more likely to follow a tails (see the discussion in the Appendix for more on this).<br><br>But for some versions of "the koin tends to land heads roughly half the time", your confidence in Switchy will be much more robust. &nbsp;Here's one&nbsp;version that's not an implausible characterization of what people often know about processes like this.<br><br>Suppose what you know about koins is: "on every set of tosses I've seen, it's landed heads around half the time&ndash;&ndash;sometimes very close to 50%, sometimes a bit further. I can't remember the details, but it's always been between 40&ndash;60%, usually between 45&ndash;55%, and often between 48&ndash;52%". If this is what you know, then&nbsp;every one&nbsp;of those sets of tosses provides more evidence for Switchy over Sticky, meaning your confidence in Switchy will be quite robust.<br><br>For example, suppose you started out &#8531; in each hypothesis and then learned that in 10 sets of 100 tosses each, each set had between 40&ndash;60 heads, 7 of them had between 45&ndash;55 heads, and 4 had between 48&ndash;52 heads. Then you should become 72% confident it's Switchy, 22% confident it's Steady, and 6% confident it's Sticky (see the first "full calculation" section of the Appendix). &nbsp;As a result, you can see a string of up to 7 tails&nbsp;in a row (with no heads), and still be more confident in Switchy than Sticky&ndash;&ndash;and, therefore, still commit the gambler's fallacy.</font></div><div class="paragraph"><br><u><strong>The Fallacy in Real Life</strong></u><br>That's what we should say about the gambler's fallacy with <em>koins</em>: it's rational.&nbsp;What should we say about the gambler's fallacy in real life?<br><br>I think we should say the same thing. &nbsp;Most people aren't&ndash;&ndash;and shouldn't be&ndash;&ndash;sure of how the outcomes from (most of) the random processes they encounter are generated. Many of these outcomes plausibly&nbsp;<em>are</em>&nbsp;either Switchy or Sticky&ndash;&ndash;for example, whether it rains on a given day, or whether a post on Twitter will get significant uptake, or whether the next card drawn from this deck is a face card. &nbsp;Many others are at least open to doubt. &nbsp;<br><br>So people&ndash;&ndash;<font color="#8640AE">especially those who haven't taken statistics courses</font>&ndash;&ndash;should often leave open that various versions of the Sticky and Switchy hypotheses are true.
And since they don't (can't) keep track of the full sequence of outcomes they've seen, what they know about the processes is often much more coarse-grained&ndash;&ndash;e.g. that a given outcome tends to happen around 50% of the time. (See the Appendix for generalizations to other percentages.)<br><br>As we've just seen, if&nbsp;that's&nbsp;what they know then they are <em>rational</em> to commit the gambler's fallacy. &nbsp;Instead of revealing a basic misunderstanding about statistics, such a tendency may reveal a subtly tuned sensitivity to statistical uncertainty.<br><br>Of course, this doesn't show that the way real people commit the fallacy is rational: they might commit it for the wrong reasons, or in too extreme a way. &nbsp;(See <a href="https://www.kevindorst.com/stranger_apologies/hindsight-bias-can-be-rational-brian-hedden" target="_blank">Brian Hedden's post on hindsight bias</a> for a discussion of how we might probe those questions&ndash;&ndash;and why it is difficult to do so.) &nbsp;But the mere fact that people commit the gambler's fallacy does not, on its own, provide evidence that they are handling uncertainty irrationally&ndash;&ndash;after all, it's exactly what we'd expect if they&nbsp;<em>were</em>&nbsp;being rational.<br><br><strong>Objection:</strong>&nbsp;What about <em>c</em>oins?&nbsp;Obviously coins have no "memory", so when it comes to coins, people should be certain that hypotheses like Switchy and Sticky are false, and instead be certain that Steady is true.<br><br><strong>Reply:</strong>&nbsp;Should they? Should&nbsp;<em>you</em>? &nbsp;Real coins are much more surprising than statistics textbooks would lead you to think. For example, despite the ubiquity of the notorious "coin of unknown bias", it's actually <a href="http://www.stat.columbia.edu/~gelman/research/published/diceRev2.pdf" target="_blank">impossible to bias a coin toward one of its sides.</a>&nbsp;Perhaps more surprisingly&ndash;&ndash;and more to the point&ndash;&ndash;it turns out that the way real people tend to flip coins leads them to have around a <a href="https://statweb.stanford.edu/~susan/papers/headswithJ.pdf" target="_blank">51% chance of landing on the side that was originally facing up</a>. So depending on the procedure you use for flipping your coin repeatedly (do you turn it over, or not, when you go to flip it again?), Steady may actually be false and some version of Switchy or Sticky true!<br><br>Given subtleties like that, it's rather implausible to insist that someone who has never taken a statistics course nor studied coins in any detail should be <em>certain</em> that hypotheses like Sticky and Switchy are false about real coins, or other more complex gambling mechanisms. &nbsp;As we've seen, so long as they shouldn't be certain of that, they&nbsp;<em>should</em>&nbsp;commit the gambler's fallacy.<br><br><strong><u>Conclusion</u></strong><br>Given people's limited knowledge about the outcomes of the random processes they encounter and the statistical mechanisms that give rise to them, they often&nbsp;<em>should</em>&nbsp;commit the gambler's fallacy. So the mere fact that they exhibit this tendency should not be taken to show that they handle statistical uncertainty in an irrational way&ndash;&ndash;if anything, it's evidence that they're handling it as they should!
&nbsp;At the least, we need more detailed information about the way and degree to which people commit the gambler's fallacy for it to provide evidence of irrationality.<br><br><br>What next?<br><strong>If you have comments, questions, or criticisms, please comment or email me!</strong> As I said, this is work in progress.<br><strong>If you want to see more details,</strong> check out the Appendix below.<br><strong>For more recent work on the gambler's and "hot hands" fallacy,</strong> see&nbsp;this <a href="https://arxiv.org/pdf/1902.01265.pdf" target="_blank">fascinating recent paper</a>.</div><div class="paragraph"><br><br><u><strong><font size="5">Appendix</font></strong></u><br>Here I&rsquo;ll give some generalizations I&rsquo;ve worked out, and some discussion of the robustness of these results.<br>&#8203;<br><font color="#8640AE"><strong><u>The&nbsp;Full Calculation</u></strong><br>Some people have questioned whether the results hold even in the simple case I focus on in the text, so I figured I'd work through the calculations to show that they do.<br><br>Take the versions of the Switchy/Steady/Sticky hypotheses we used above. Suppose you are initially &#8531; confident in each: P(Switchy) = P(Steady) = P(Sticky) = &#8531;.<br><br>Now suppose you learn that 50 of 100 tosses landed heads ("<strong>50H</strong>"). The likelihoods of this given each of the hypotheses are those from Figure 4, reproduced here:</font></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/1m-simple-100toss_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div class="paragraph"><font color="#8640AE">In particular:</font></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/20-5-11-likelihood-eqns.png?1589215207" alt="Picture" style="width:230;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div class="paragraph"><font color="#8640AE">The posterior credences you should have in each hypothesis follow from Bayes' rule, which says that:</font></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/20-5-11-bayes-rule_orig.png" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div class="paragraph"><font color="#8640AE">Parallel calculations show that P(Steady | 50H) &asymp; 0.329 and P(Sticky | 50H) &asymp; 0.268.<br><br>Given that, suppose you've learned that the koin just landed tails. This on its own provides no evidence about Switchy/Steady/Sticky (since you have no information about what state it was in beforehand). &nbsp;Thus your credence that the next toss will land heads should be: 0.403*0.6 + 0.329*0.5 + 0.268*0.4 &asymp; 0.513&ndash;&ndash;you&nbsp;should commit the gambler's fallacy.<br><br>Suppose you have more information about&nbsp;the koin, of the form discussed above. Out of 10 sets of 100 tosses, the koin landed heads between 40&ndash;60 times in all of them, between 45&ndash;55 in 7 of them, and between 48&ndash;52 in 4 of them.
&nbsp;(Incidentally, this is what we'd expect you to see if the koin was in fact <em>Steady</em>, and all you remembered was how close it was to 50 heads).&nbsp;<br><br>The likelihoods of 40&ndash;60 heads, given Switchy/Steady/Sticky are 0.99 / 0.96 / 0.91. The likelihoods of 45&ndash;55 heads are 0.82 / 0.73 / 0.63. And the likelihoods of 48&ndash;52 heads are 0.46 / 0.38 / 0.32. &nbsp;The order doesn't matter, so we can just update by each of these likelihoods the relevant number of times (3 for the 40&ndash;60 likelihoods, 3 for 45&ndash;55 ones, and 4 for the 48&ndash;52 ones).<br><br>Starting at &#8531; in each hypothesis and updating on your information about the 10 sets of 100 tosses leaves you with posterior credences of P(Switchy) = 0.72, P(Steady) = 0.22, and P(Sticky) = 0.058.&nbsp;<br><br>Upshot: if your knowledge that the koin lands heads "roughly half the time" amounts to knowledge like this&ndash;&ndash;"it always lands heads around 50% of the time, and usually quite close to that"&ndash;&ndash;then you should be&nbsp;<em>much</em>&nbsp;more confident in Switchy than in Sticky, and that discrepancy will be robust to seeing a long series of tails in a row, meaning you'll still commit the gambler's fallacy. &nbsp;(In our example, up to 7 tails in a row with no heads and you'll still be more confident in Switchy than Sticky.)</font></div>
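<div class="paragraph"><font color="#8640AE">If you want to check this calculation yourself, here's a toy continuation of the earlier Python sketch, reusing its heads_count_distribution function (again, an illustration rather than the actual koin code):</font></div><pre><code># Posteriors after the 10 sets of 100 tosses described above: for each set,
# update on the most specific interval remembered (3 sets: 40-60 heads only;
# 3 more sets: 45-55; 4 sets: 48-52).
def interval_prob(counts, lo, hi):
    """P(the number of heads lands in [lo, hi])."""
    return sum(counts[lo:hi + 1])

dists = {"Switchy": switchy, "Steady": steady, "Sticky": sticky}  # n = 100
posts = {name: 1/3 for name in dists}                             # equal priors
for (lo, hi), reps in [((40, 60), 3), ((45, 55), 3), ((48, 52), 4)]:
    for name in posts:
        posts[name] *= interval_prob(dists[name], lo, hi) ** reps
z = sum(posts.values())
print({name: round(p / z, 3) for name, p in posts.items()})
# roughly {'Switchy': 0.72, 'Steady': 0.22, 'Sticky': 0.06}
</code></pre>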
<div class="paragraph"><br><strong style="color:rgb(42, 42, 42)"><u>Generalizing the hypotheses</u></strong><br><span style="color:rgb(42, 42, 42)">We can easily generalize the Sticky/Switchy hypotheses.&nbsp; Let&nbsp;</span><em style="color:rgb(42, 42, 42)">Ch</em><span style="color:rgb(42, 42, 42)">&nbsp;be the objective chance of various outcomes, given how it&rsquo;s landed so far. Suppose you know the koin has landed tails several times in a row, as above.&nbsp; Let&nbsp;</span><em style="color:rgb(42, 42, 42)">H&nbsp;</em><span style="color:rgb(42, 42, 42)">be the claim that it&rsquo;ll land heads on the next toss. There are three possible alternatives: either the objective chance of heads is greater than, equal to, or less than 50%.&nbsp; So we can (re-)define our propositions:</span></div><div><div id="710824449967700028" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml">   \begin{align*} Switchy &amp;= [Ch(H)&gt;0.5] \\ Steady &amp;= [Ch(H)=0.5] \\ Sticky &amp;= [Ch(H)&lt;0.5] \end{align*}</div></div><div class="paragraph">It follows from total probability that your confidence should be the following weighted average:</div><div><div id="777061592666435367" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml">   \begin{align*} P(H) ~=~ &amp;P(Ch(H)&gt;0.5)\,P(H \mid Ch(H)&gt;0.5) ~+~ P(Ch(H)=0.5)\,P(H \mid Ch(H)=0.5) \\ &amp;+~ P(Ch(H)&lt;0.5)\,P(H \mid Ch(H)&lt;0.5) \end{align*}</div></div><div class="paragraph">It follows from the Principal Principle that P(H | Ch(H) &gt; 0.5) &gt; 0.5 and that P(H | Ch(H) &lt; 0.5) &lt; 0.5.&nbsp; In our situation you have no reason to treat these two options asymmetrically, so there should be some constant c such that P(H | Ch(H) &gt; 0.5) = 0.5 + c, while P(H | Ch(H) &lt; 0.5) = 0.5 - c.&nbsp; It follows that:</div><div><div id="827917969692911831" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml">   \[ P(H) ~=~ P(Ch(H)&gt;0.5)(0.5+c) ~+~ P(Ch(H)=0.5)(0.5) ~+~ P(Ch(H)&lt;0.5)(0.5-c) \]</div></div><div class="paragraph"><br>And again, this value will be greater than 50% iff P(Switchy) &gt; P(Sticky).<br><br>The trick with using these&nbsp;definitions is that we now need to be careful about what the plausible versions of the Sticky/Switchy hypotheses amount to. We can no longer simply assume they are the 40%-60% hypotheses (from Figures 1&ndash;3) I assumed above, so we can&rsquo;t straightforwardly calculate the likelihoods of various outcomes given Sticky and Switchy. Nevertheless, the plausible versions of these hypotheses will have the same general shape, although some will be more or less extreme in their divergences from 50% probabilities, some may have longer &ldquo;memories&rdquo; so that it takes longer streaks to reach these divergences, and so on. See below for direct handling of some of these issues.<br><br><strong><u><span>Robustness</span></u></strong><br>Since I have no tractable algebraic expression for the likelihoods generated by various Sticky/Switchy hypotheses&mdash;even in the simple cases&mdash;there are limits on what I can prove about it.&nbsp; (Hunch: what matters for the difference in likelihoods between Switchy and Sticky is that the former has a shorter <a href="https://en.wikipedia.org/wiki/Markov_chain_mixing_time">mixing time</a> than the latter; perhaps that can be used in a proof? <strong>Any mathematicians out there to help a philosopher out?</strong>)<br><br>Nevertheless, it&rsquo;s easy to check that these results are robust. For example, here are the likelihoods for various proportions of heads from the three simple hypotheses at 10, 50, 100, and 500 tosses.
Clearly we are (quickly) approaching a limit in the ratios of likelihoods of 50% heads, and the differences are not washing out.</div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/1m-simple-10toss_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/1m-simple-50toss_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/1m-simple-100toss_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/1m-simple-500toss_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div class="paragraph">And here are graphs plotting the likelihoods for Switchy and Sticky hypotheses with various probabilities of sticking or switching, at 20, 50, 100, and 300 tosses (for example, Switchy (0.7) has a 70% chance of switching; Sticky (0.4) has a 40% chance of switching; etc.):</div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/1m-simple-many-20toss_orig.jpeg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/1m-simple-many-50toss_orig.jpeg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/1m-simple-many-100toss_orig.jpeg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/1m-simple-many-300toss_orig.jpeg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div class="paragraph"><br><u><strong><span>&#8203;Longer &ldquo;Memories&rdquo;</span></strong></u><br>The explicit versions of the Sticky/Switchy hypotheses we&rsquo;ve looked at so far all had &ldquo;memories&rdquo; of only size 1&mdash;the probabilities of 
outcomes only depend on how the <em>last</em> toss landed.&nbsp; But both intuitively and <a href="https://www.stat.berkeley.edu/~aldous/157/Papers/croson.pdf">empirically</a>, people are much more likely to commit the gambler&rsquo;s fallacy (or <strong>"hot hands fallacy"</strong>&mdash;see below) after <em>long streaks</em> of outcomes.&nbsp; It&rsquo;s only when tails comes up 4 or 5 or more times in a row that people start to expect heads.<br><br>&#8203;This is easy to model: we simply expand the set of states in our Markov chain.&nbsp; Instead of just H and T, the states now track how long the streak of heads or tails has been, with the probabilities shifting gradually as the streak builds up.&nbsp; For example, here are diagrams representing 2-memory Switchy and Sticky hypotheses, where the probabilities build to a 60% chance to stick or switch, in both diagram and transition-matrix notation. (In the matrix, row i, column j gives the probability of transitioning from state i to state j.) For example, the Switchy hypothesis says that after one heads, the koin is 55% likely to switch back to tails, and after two or more heads in a row it's 60% likely to switch to tails.</div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/2m-switchy-diagrams_orig.png" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">2-memory, Switchy (0.6) hypothesis, in both graph and matrix notation.</div></div></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/2m-sticky-diagrams_orig.png" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">2-memory, Sticky (0.4) hypothesis, in both graph and matrix notation.</div></div></div><div class="paragraph">And though I'm not going to try to draw the 10-state diagram, here's the transition matrix for a 5-memory Switchy hypothesis that grows steadily to a 60% switch rate.</div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/5m-switchy-diagram_orig.png" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">5-memory Switchy (0.6) hypothesis.</div></div></div><div class="paragraph">As you&rsquo;d expect, the qualitative results from these hypotheses are the same as before, but (very) slightly dampened.&nbsp;
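<div class="paragraph">(For those who want to experiment with these chains, here is a minimal sketch, in Python/NumPy, of one way to build such k-memory Switchy transition matrices, with the switch probability growing in equal steps from 50% up to a given extreme. The state ordering is my own bookkeeping choice; the diagrams above may order the states differently.)</div><pre>
# Sketch: a k-memory Switchy matrix whose switch probability grows in k equal
# steps from 50% up to `extreme` (so the 2-memory Switchy (0.6) chain switches
# at 55% after one head and at 60% after two or more, as described above).
# States (my ordering): T_k, ..., T_1, H_1, ..., H_k, where X_j means a streak
# of j X's in a row (and X_k means a streak of k or more).
import numpy as np

def switchy_matrix(k, extreme=0.6):
    n = 2 * k
    P = np.zeros((n, n))
    for j in range(1, k + 1):
        s = 0.5 + j * (extreme - 0.5) / k    # P(switch) after a streak of length j
        t_j, h_j = k - j, k + j - 1          # row indices of states T_j and H_j
        P[t_j, k] = s                        # T_j --switch--> H_1
        P[t_j, max(t_j - 1, 0)] = 1 - s      # T_j --stick--> T_{j+1}, capped at T_k
        P[h_j, k - 1] = s                    # H_j --switch--> T_1
        P[h_j, min(h_j + 1, n - 1)] = 1 - s  # H_j --stick--> H_{j+1}, capped at H_k
    return P

print(switchy_matrix(2))        # compare with the 2-memory Switchy (0.6) matrix
# switchy_matrix(2, extreme=0.4) should likewise give the 2-memory Sticky (0.4)
# chain: the same formula just shrinks the switch probability below 50%.
</pre>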
For example, here are the likelihoods of various outcomes of 100 tosses from our original 1-memory 60% hypotheses (reproduced from Figure 4), vs. the likelihoods of outcomes with the 2-memory, 3-memory, and 5-memory 60% hypotheses:<br><span></span></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/1m-simple-100toss_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">1-memory likelihoods, 100 tosses.</div></div></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/2m-simple-100toss_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">2-memory likelihoods, 100 tosses.</div></div></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/3m-simple-100toss_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">3-memory likelihoods, 100 tosses.</div></div></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/5m-simple-100toss_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">5-memory likelihoods, 100 tosses.</div></div></div><div class="paragraph">It&rsquo;s not until we get "memories" of size 10 or more that we start to see significant dampening of the divergence of likelihoods:<br><br></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/10m-simple-100toss_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div class="paragraph">And it's worth noting that the&nbsp;<em>qualitative</em>&nbsp;results will be the same in all these cases, though the degree of gambler's fallacy warranted will decrease as the differences in the likelihoods get smaller.<br><br>&#8203;It seems, <a href="https://www.stat.berkeley.edu/~aldous/157/Papers/croson.pdf" target="_blank">empirically</a>, that the&nbsp;<span>versions of the Sticky and Switchy hypotheses that people take seriously are in the 5- to 10-memory range.&nbsp; For robustness checks, I'll show the likelihoods at 10, 50, 100, and 500 tosses for various 5-memory hypotheses whose probabilities move at constant increments up to a given extreme; for example, "5-memory Switchy (0.7)" is the chain that takes 5 steps to become 70% likely to switch, and "5-memory Sticky (0.3)"</span> is the chain that takes 5 steps to become 30% likely to switch<span>:</span></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/5m-switchy-0-7-diagram_orig.png" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">5-memory Switchy (0.7) transition matrix.</div></div></div>
style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/5m-sticky-0-3-diagram_orig.png" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">5-memory, Sticky (0.3) transition matrix.</div></div></div><div class="paragraph">Here are the robustness checks for these and other hypotheses at 10, 50, 100, and 500 tosses:</div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/5m-many-10toss_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/5m-many-50toss_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/5m-many-100toss_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/5m-many-500toss_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div class="paragraph">Upshot:&nbsp;the qualitative results will be the same under these various more realistic versions of the hypotheses.<br><br><u><strong><span>Hot Hands</span></strong></u><br>The &ldquo;<a href="https://en.wikipedia.org/wiki/Hot_hand">hot hands fallacy</a>&rdquo; is the tendency to think that an outcome is &ldquo;streaky&rdquo; in the sense that if a given outcome happens, it is <em>more</em> likely that it&rsquo;ll happen again on the next trial. &nbsp; In that sense, it's the opposite of the gambler's fallacy: where gambler's expect things to switch, hot-handsers expect things to stick.&nbsp;(The issue from basketball; see <a href="https://doi.org/10.2139%2Fssrn.2627354">this recent paper</a> for a fascinating discussion of why there were statistical mistakes in the original papers claiming to show that there is not "hot hand" in basketball.)<br><br>We saw above that when P(Switchy) &gt; P(Sticky), the gambler&rsquo;s fallacy is rational, and you should be more than 50% confident that the koin will switch how it lands between tosses.&nbsp; By parallel reasoning, whenever P(Switchy) &lt; P(Sticky), it follows that you should be <em>less</em> than 50% confident that the koin will land differently to how it did before&mdash;i.e. you should be more than 50% confident that it will land the same way. In other words, whenever P(Switchy) &lt; P(Sticky), you should commit the hot hands fallacy!<br><br>Upshot: the <em>only</em> time when you should commit neither the gamblers fallacy nor the hot-hands fallacy is when you should be exactly equally confident in Switchy and Sticky: P(Switchy) <strong>=</strong> P(Sticky). 
Since such a perfect balance of evidence will be rare, you should almost always commit one of these &ldquo;fallacies&rdquo; (though perhaps to only a very small degree).<br><br>In particular, suppose you start out equally confident in each of Switchy and Sticky, and then learn what proportion of times the koin landed heads in some series of tosses.&nbsp; For example, return to our 100-toss example (Figure 4) with the 40/50/60 hypotheses, and recall the likelihoods:</div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/1m-simple-100toss_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">1-memory likelihoods, 100 tosses.</div></div></div><div class="paragraph">After learning the proportion of heads out of 100 tosses, you should be more confident of Switchy than Sticky iff the blue curve is higher than the green one for the outcome (proportion of heads) you observe, and less confident iff it's lower. In fact, there is <em>no</em> outcome of a 100-toss sequence that would make these exactly equal, so given any outcome, you should commit either the gambler&rsquo;s or the hot-hands fallacy on the next toss.<br><br><br><u><strong><span>How can you learn it&rsquo;s Steady?</span></strong></u><br>You might think we&rsquo;ve run ourselves into a bit of a paradox here.&nbsp; Note that in all the graphs I&rsquo;ve shown, the Steady likelihoods almost never come out ahead overall.&nbsp; In the middle of the graphs, they are dominated by the Switchy likelihoods, and at the edges they are dominated by the Sticky likelihoods.&nbsp; This remains true as we crank up the experiment to arbitrarily many tosses of the koin.&nbsp;<br><br>So&hellip; what gives? Does our reasoning show that it&rsquo;s impossible to learn, by tossing a koin, that it is Steady?&nbsp; If so, the reasoning has gone wrong somewhere.<br><br>But it doesn&rsquo;t show that. What it shows is that <em>if all you learn is the proportion of heads</em>, you won&rsquo;t be able to get strong evidence that the koin is Steady.&nbsp; To get that evidence, you&rsquo;d really need to look closely at the <em>sequences</em> you observe.&nbsp;The Switchy hypotheses make streaks very unlikely, the Sticky hypotheses make frequent alternations unlikely, and the Steady hypotheses strike a balance.&nbsp; If you looked at the full sequence for long enough, you&rsquo;d almost surely <span style="color:rgb(42, 42, 42)">(in the technical sense)</span>&nbsp;get to the truth of the matter about whether the koin is Sticky, Steady, or Switchy.<br><br>But what we <em>have</em> shown is that <em>without</em> that full data&mdash;tracking only the proportion of heads&mdash;even perfect Bayesians would <em>not</em> be able to figure out whether the koin is Sticky, Steady, or Switchy.&nbsp;That is an interesting result, because real people can&rsquo;t keep track of full sequences of tosses&mdash;<em>at best</em> they can keep track of (rough) proportions.&nbsp;<br><br><u><strong><span>Non-50% versions</span></strong></u><br>Every version we&rsquo;ve looked at so far is one where the number of heads stays around 50%. 
This is apt for coin tosses, but not so for other chancy events like basketball shots or drawing a face card.&nbsp; We&rsquo;ll need to generalize what plausible Sticky and Switchy hypotheses look like for processes where the average number of heads (or &ldquo;hits&rdquo;) differs from 50%. For example, in the NBA&mdash;where the hot hands fallacy discussion is at home&mdash;shooting percentages are often around 45%.&nbsp;<br><br>The reasoning generalizes, but it gets a bit subtle.&nbsp;In the 50% case, all my examples assumed &ldquo;symmetry&rdquo;: the amount added to (or subtracted from) the probability of heads when the koin just landed heads is the same as the amount subtracted from (or added to) the probability of heads when it just landed tails.&nbsp;<br><br>This isn&rsquo;t the right version of symmetry when the Steady hypothesis is no longer 50%. For example, suppose the Steady hypothesis is that, no matter how it&rsquo;s landed, the koin has a 40% chance to land heads on each toss.&nbsp; Then we expect that in the long run it&rsquo;ll land heads in 40% of all tosses. &nbsp;So here's our Steady hypothesis:</div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/non-50-steady-diagram.png?1588958182" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">40%-heads Steady hypothesis.</div></div></div><div class="paragraph">You might think natural Sticky and Switchy hypotheses centered around this would simply add or subtract a fixed amount (say, 0.1) to the probabilities depending on the state, as before:</div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/non-50-switchy-1.png?1588958367" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">Conjectured 40%-heads Switchy hypothesis (WRONG).</div></div></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/non-50-sticky-1.png?1588958485" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">Conjectured 40%-heads Sticky hypothesis (WRONG).</div></div></div><div class="paragraph">But that&rsquo;s wrong.&nbsp; The &ldquo;<a href="https://en.wikipedia.org/wiki/Stationary_distribution">stationary distribution</a>&rdquo; of these two Markov chains is <em>not</em> 40/60&mdash;rather, for the Sticky hypothesis it&rsquo;s around&nbsp;38/62&nbsp;and for the Switchy one it&rsquo;s around 42/58, meaning that in the long run we would expect them to have <em>these</em> proportions of heads to tails, rather than 40/60.&nbsp; Accordingly, the likelihood graphs do not properly overlap:<br><span></span></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/non-50-100toss-1_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">Likelihoods for conjectured (WRONG) 40%-heads hypotheses, 100 tosses.</div></div></div>
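<div class="paragraph">(Here is a quick way to check those stationary distributions: a minimal sketch in Python/NumPy, with the conjectured add-or-subtract-0.1 transition probabilities read off the diagrams above. The computed values reproduce the 42/58 and 38/62 quoted in the text.)</div><pre>
# Sketch: stationary distributions of the conjectured (WRONG) 40%-heads chains.
# Row/column order is (H, T); entry [i, j] = P(next state is j | current is i).
import numpy as np

def stationary(P):
    """Left eigenvector of P for eigenvalue 1, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return v / v.sum()

wrong_switchy = np.array([[0.3, 0.7],    # after H: 30% heads, 70% tails
                          [0.5, 0.5]])   # after T: 50% heads, 50% tails
wrong_sticky  = np.array([[0.5, 0.5],    # after H: 50% heads, 50% tails
                          [0.3, 0.7]])   # after T: 30% heads, 70% tails

print(stationary(wrong_switchy))   # ~[0.417, 0.583]: roughly 42/58, not 40/60
print(stationary(wrong_sticky))    # ~[0.375, 0.625]: roughly 38/62, not 40/60

# For any 1-memory chain with a = P(H|H) and b = P(H|T), the stationary
# probability of heads is b / (1 - a + b); requiring that this equal 0.4 is
# one recipe for 1-memory chains with the right long-run proportion.
</pre>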
class="paragraph">The proper Sticky/Switchy hypotheses if the overall proportion of heads is 40% are ones whose probabilities move depending on where they are, but in a way that leads them to have the same stationary as the Steady hypothesis. I haven&rsquo;t yet figured out a general recipe for this (<strong>any mathematicians care to help?</strong>), but here is an example of 1-memory Sticky and Shifty hypotheses that have the correct stationaries:</div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/non-50-switchy-2.png?1588958820" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">40%-heads Switchy hypothesis.</div></div></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/non-50-sticky-2.png?1588958884" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">40%-heads Sticky hypothesis.</div></div></div><div class="paragraph">And here are the likelihood graphs for 100 tosses of these two hypotheses vs. the 40%-heads Steady hypothesis:<br><span></span></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/non-50-100toss-2_orig.jpg" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">40%-heads likelihoods, 100 tosses.</div></div></div><div class="paragraph">Upshot: the same qualitative lessons hold for processes that don't come up "heads" 50% of the time;&nbsp;<span style="color:rgb(42, 42, 42)">if</span><span style="color:rgb(42, 42, 42)">&nbsp;all you know is that "roughly x% of the tosses land heads", you should be more confident in (the right version of)&nbsp;Switchy than Sticky, and so should commit the gambler's fallacy.<br><br>&#8203;<br><br>...Phew! Those are all the generalizations and further notes I have (for now). If you have any thoughts or feedback, please do send them along! &nbsp;&#8203;Thanks!</span><br><br></div>]]></content:encoded></item><item><title><![CDATA[Hindsight Bias Can Be Rational (Brian Hedden)]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/hindsight-bias-can-be-rational-brian-hedden]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/hindsight-bias-can-be-rational-brian-hedden#comments]]></comments><pubDate>Sat, 25 Apr 2020 17:49:42 GMT</pubDate><category><![CDATA[Uncategorized]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/hindsight-bias-can-be-rational-brian-hedden</guid><description><![CDATA[(This is a guest post by&nbsp;Brian Hedden. 2400 words; 10 minute read.)It&rsquo;s now part of conventional wisdom that people are irrational in systematic and predictable ways. Research purporting to demonstrate this has resulted in at least 2 Nobel Prizes and a number of best-selling books. It&rsquo;s also revolutionized economics and the law, with potentially significant implications for public policy.&nbsp;Recently, some scholars have begun pushing back against this dominant irrationalist na [...] 
]]></description><content:encoded><![CDATA[<div class="paragraph"><font color="#818181">(This is a guest post by&nbsp;<a href="http://brian-hedden.com" target="_blank">Brian Hedden</a>. 2400 words; 10 minute read.)</font><br /><br /><span style="color:rgb(42, 42, 42)">It&rsquo;s now part of conventional wisdom that people are irrational in systematic and predictable ways. Research purporting to demonstrate this has resulted in at least 2 Nobel Prizes and a number of best-selling books. It&rsquo;s also revolutionized economics and the law, with potentially significant implications for public policy.&nbsp;</span><br /><br /><span style="color:rgb(42, 42, 42)">Recently, some scholars have begun pushing back against this dominant irrationalist narrative. Much of the pushback has come from philosophers, and it has come by way of questioning the normative models of rationality assumed by the irrationalist economists and psychologists.&nbsp;</span><a href="https://philpapers.org/rec/KELSCR" target="_blank">Tom Kelly has argued</a><span style="color:rgb(42, 42, 42)">&nbsp;that sometimes, preferences that appear to constitute committing the sunk cost fallacy should perhaps really be regarded as perfectly rational preferences concerning the narrative arc of one&rsquo;s life and projects.&nbsp;</span><a href="https://philpapers.org/rec/NEBSQB" target="_blank">Jacob Nebel has argued</a><span style="color:rgb(42, 42, 42)">&nbsp;that status quo bias can sometimes amount to a perfectly justifiable conservatism about value. And Kevin Dorst has argued that&nbsp;</span><a href="https://phenomenalworld.org/analysis/why-rational-people-polarize" target="_blank">polarization</a><span style="color:rgb(42, 42, 42)">&nbsp;and the&nbsp;</span><a href="https://philpapers.org/rec/DOROIO" target="_blank">overconfidence effect</a><span style="color:rgb(42, 42, 42)">&nbsp;might be perfectly rational responses to ambiguous evidence.&nbsp;</span><br /><br /><span style="color:rgb(42, 42, 42)">In this post, I&rsquo;ll explain&nbsp;</span><a href="https://philpapers.org/rec/HEDHBI" target="_blank">my own work</a><span style="color:rgb(42, 42, 42)">&nbsp;pushing back against the conclusion that humans are predictably irrational in virtue of displaying so-called&nbsp;</span><em style="color:rgb(42, 42, 42)">hindsight bias</em><span style="color:rgb(42, 42, 42)">. Hindsight bias is the phenomenon whereby knowing that some event actually occurred leads you to give a higher estimate of the degree to which that event&rsquo;s occurrence was supported by the evidence available beforehand. I argue that not only is hindsight bias often not irrational; sometimes it&rsquo;s even rationally required, and so failure to display hindsight bias would be irrational.</span><br /></div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div class="paragraph">Of course, the fact that hindsight bias is compatible with&ndash;&ndash;and sometimes even entailed by&ndash;&ndash;models of ideal rationality doesn&rsquo;t mean that we humans are rational when we display hindsight bias. We might go too far, or base our hindsight bias on bad reasons. And so I&rsquo;ll close by considering how we might attempt to test whether, when actual humans display hindsight bias, or other alleged biases for that matter, we are doing so in the rational way I identify or are instead doing so in some other irrational way.&nbsp;<br /><br /><u><span><strong>Hindsight Bias&nbsp;</strong></span></u><br />Consider a medical case. 
The doctor had evidence consisting of X-rays and blood samples, and on that basis had to reach a conclusion about how likely it was that the patient had a tumour. Hindsight bias means that you&rsquo;ll estimate <em>that very evidence</em>, the evidence that was available <em>ex ante</em>, as more strongly supporting the existence of a tumour if you know that the tumour&rsquo;s existence was later confirmed than if you don&rsquo;t.&nbsp;<br /><br />Or consider the case of an accident. The railroad corporation had evidence concerning the safety of its tracks, including a variety of models and records of track inspections. Hindsight bias means that you&rsquo;ll estimate that evidence as supporting a higher probability of train derailment if you know that a train in fact derailed than if you don&rsquo;t.&nbsp;<br /><br />It&rsquo;s tempting to think that the judgments you give when you have the benefit of hindsight are irrational and tend to overestimate the degree to which the <em>ex ante</em> evidence supported the relevant hypothesis &ndash; that the patient had a tumour, or that a train would derail. The idea is that it&rsquo;s somehow unfair to take into account evidence that wasn&rsquo;t available <em>ex ante</em> (namely, evidence that the hypothesis is in fact true) and have it influence your judgment about the upshot of the <em>ex ante</em> evidence. Hindsight is 20/20, but it&rsquo;s irrational to thereby think that foresight should have been 20/20 too.&nbsp;<br /><br />After all, evidence can be misleading or ambiguous, so why not just think that that&rsquo;s what happened here? That is, why not think that the <em>ex ante</em> evidence didn&rsquo;t strongly support the diagnosis of a tumour or the prediction of a train derailment, even though that diagnosis and prediction would have in fact been correct?<br /><br />&#8203;As these cases suggest, hindsight bias is of particular importance in the context of negligence lawsuits. In such cases, factfinders need to determine whether the defendants (the doctor or the railroad corporation) took reasonable steps in light of the evidence they had available. If hindsight bias leads those factfinders to overestimate the degree to which the <em>ex ante</em> evidence suggested a need for action (action which wasn&rsquo;t taken by the defendants), then it will likewise lead them to tend to judge the defendants negligent when in fact they weren&rsquo;t.&nbsp;<br /><br /><u><span><strong>Lower-Order Evidence</strong></span></u><br />Above I gave a quick-and-dirty argument for thinking that hindsight bias is irrational. Suppose that, before learning whether H was true, you rated the <em>ex ante</em> evidence as not very strongly supporting H. (In jargon, your expectation of the degree to which that evidence supported H was low.) Then you learn that H is true. Well, evidence can be misleading, and so the fact that H is in fact true still leaves open the possibility that the <em>ex ante</em> evidence didn&rsquo;t strongly support H. So perhaps you shouldn&rsquo;t change your view about how strongly the <em>ex ante</em> evidence supported H and should instead just conclude that flukes happen.&nbsp;<br /><br />This is a bad argument. We can see that it&rsquo;s a bad argument by considering an analogous case involving coin flips (epistemologists love coin flips!). 
Suppose that, before tossing a mystery coin of unknown bias, you think it&rsquo;s probably biased towards tails, though you also think there&rsquo;s some chance that it&rsquo;s biased towards heads. (In jargon, your expectation of the coin&rsquo;s bias towards heads is below 0.5.)&nbsp;<br /><br />Then you toss it and see it land heads. Should you change your expectation of the coin&rsquo;s bias towards heads? Reasoning analogous to the above would say &lsquo;no.&rsquo; After all, even if the coin is biased towards tails, that still leaves open the possibility that the coin would land heads; so instead of raising your expectation of the coin&rsquo;s bias towards heads, you should stick to your original view and just conclude that this toss was a fluke.&nbsp;<br /><br />But that&rsquo;s clearly wrong! Any Bayesian will tell you that upon seeing the coin land heads, you should increase your expectation of the coin&rsquo;s bias towards heads. Here&rsquo;s why. The conditional probability of heads given that the coin is biased towards heads is higher than the unconditional probability of heads (that is, the probability of heads given your antecedent views about the coin): P(H|Heads-Biased)&gt;P(H). And positive probabilistic relevance is symmetric, meaning that from the above inequality it follows that P(Heads-Biased|H)&gt;P(Heads-Biased). Similarly, the conditional probability of heads given that the coin is biased towards tails is lower than the unconditional probability of heads: P(H|Tails-Biased)&lt;P(H). It follows by symmetry that P(Tails-Biased|H)&lt;P(Tails-Biased). So upon learning H (i.e. upon seeing the coin land heads), you should be more confident that the coin is biased towards heads and less confident that it is biased towards tails.&nbsp;<br /><br />The analogy is clear: Hypotheses about the bias of the coin are like hypotheses about <em>ex ante</em> evidential support. And seeing how the coin landed is like gaining the benefit of hindsight. Just as seeing the coin land heads should increase your expectation of the coin&rsquo;s bias towards heads, so learning that some other hypothesis is true should increase your expectation of the degree to which the <em>ex ante</em> evidence supported that hypothesis.&nbsp;<br /><br />Now, you might think that the coin flip case is importantly different from the case of hindsight bias. Hypotheses about the bias of the coin are contingent and knowable only <em>a posteriori</em>. But hypotheses about evidential support (or, at least, hypotheses about <em>fundamental </em>evidential support, i.e. about how strongly some body of evidence on its own supports some hypothesis) are necessary and knowable <em>a priori</em>. So while you can&rsquo;t come to know the bias of a coin just by thinking really hard about the coin, you can come to know how strongly some body of evidence supports some hypothesis just by thinking really hard about that evidence. Ideally rational agents might be uncertain about the bias of some coin, but they won&rsquo;t ever be uncertain about how strongly any body of evidence supports any given hypothesis.&nbsp;<br /><br />That&rsquo;s a natural picture, but it&rsquo;s now widely rejected by epistemologists. Even if the fundamental evidential support facts are necessary, ideally rational agents can and often should be uncertain about them. 
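<div class="paragraph">To make the coin-flip argument concrete, here is a toy Bayesian check (in Python; the two-point prior over biases is just an illustrative choice):</div><pre>
# Toy check: seeing heads should raise your expectation of the coin's bias.
# Illustrative prior: 70% credence the coin is tails-biased (chance of heads
# 0.3), 30% credence it is heads-biased (chance of heads 0.7).
prior = {0.3: 0.7, 0.7: 0.3}       # bias towards heads -> prior credence

def expected_bias(dist):
    return sum(bias * p for bias, p in dist.items())

print(expected_bias(prior))        # 0.42: expectation of heads-bias is below 0.5

# Update on one observed heads: P(bias | H) is proportional to bias * P(bias).
posterior = {bias: bias * p for bias, p in prior.items()}
total = sum(posterior.values())
posterior = {bias: p / total for bias, p in posterior.items()}

print(posterior)                   # {0.3: 0.5, 0.7: 0.5}: heads-bias more credible
print(expected_bias(posterior))    # 0.50: the expectation has gone up, as it should
</pre>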
So when two people examine the same exact evidence and come to different and conflicting views about some hypothesis, neither should be certain about how strongly that evidence supported the hypothesis, even if one of them happened to have initially judged things correctly. And this higher-order uncertainty about the fundamental evidential support facts isn&rsquo;t inert; instead, it should impact your views about first-order matters. Becoming more confident that your evidence in fact supports H should in general make you more confident in H itself. (Note, however, that it&rsquo;s a <em>very</em> tricky issue how exactly this should go.) So if you initially judged that the evidence supported low confidence in H, but then you learn that your peer thought it supported high confidence in H, you should yourself increase your confidence that the evidence supported high confidence in H, and you should increase your confidence in H itself on that basis.&nbsp;<br /><br />On this picture, the case of the coins is in fact analogous to the case of hindsight bias. It&rsquo;s a picture on which higher-order evidence and higher-order uncertainty (that is, evidence and uncertainty concerning relations of evidential support) have first-order consequences, just as evidence and uncertainty about the bias of a coin have consequences for your views about how it will land. Given that, the symmetry of positive probabilistic relevance means that learning those first-order facts has higher-order consequences for your views about evidential support, just as learning how the coin lands has consequences for your views about its bias.&nbsp;<br /><br />Higher-order evidence, and how to respond to it, is a hot topic in epistemology these days. I&rsquo;ve shown that recognising the importance of higher-order evidence, and its obverse, lower-order evidence, allows us to see that so-called hindsight bias is often perfectly rational. And the significance of higher-order evidence may help to cast other alleged biases in a new light, sometimes showing that they too can be perfectly rational.&nbsp;<br /><br /><u><span><strong>Testing for Irrationality</strong></span></u><br />While I claim that hindsight bias can be perfectly rational, I don&rsquo;t claim that humans are always (or even usually, or often) perfectly rational when <em>they</em> display hindsight bias. Indeed, my own suspicion is that real-world hindsight bias is still often irrational. Humans may go &lsquo;too far&rsquo; with their hindsight bias, using hindsight to change their views about the upshot of the <em>ex ante</em> evidence to a greater extent than is warranted by the above analysis. And even when humans display just the right &lsquo;amount&rsquo; of hindsight bias, they may do so for bad reasons. Rather than displaying hindsight bias as a rational response to lower-order evidence, they may do so on the basis of evidentially irrelevant motivational factors like the need for closure or the desire to see themselves as experts.&nbsp;<br /><br />Much the same goes for other alleged biases that have been defended by philosophers. Tom Kelly, for example, argues that some instances of the sunk cost fallacy may in fact be based on perfectly rational &lsquo;redemptive preferences&rsquo; whereby the agent is concerned with the narrative arc of her life, preferring a life filled with projects seen through to fruition rather than a bunch of false starts. 
But he doesn&rsquo;t claim that <em>all</em> instances of this sort of behaviour are rational in this way.&nbsp;<br /><br />Could we run tests to determine the extent to which humans display hindsight bias or sunk cost reasoning <em>irrationally</em>? I am not sure, but I think it would be very difficult and require rather intricate experimental design, not to mention lots of theoretical work to get the normative models right.&nbsp;<br /><br />There are two reasons for my scepticism. First, if we assume that it&rsquo;s <em>always</em> irrational to display some pattern of behaviour to <em>any</em> extent, then things are easy to test. If we find that people ever display that behaviour, then that&rsquo;s it &ndash; end of story. In the case of hindsight bias, if we assume that your judgments about the upshot of the <em>ex ante</em> evidence shouldn&rsquo;t differ <em>at all</em> depending on whether you know what wound up happening, then any detectable difference, no matter how small (provided it&rsquo;s statistically significant), suffices to show that you&rsquo;re irrational. There&rsquo;s a sharp, empirically detectable distinction between &lsquo;no difference&rsquo; and &lsquo;some difference.&rsquo; But if we assume that your judgments should differ to some degree depending on whether you know what wound up happening, then we can&rsquo;t determine whether they differ &lsquo;too much&rsquo; unless we know exactly how much they should differ. But I think it might well be impossible to say &lsquo;how much&rsquo; hindsight bias you should display in any realistic case.&nbsp;<br /><br />Second, we might try to determine whether humans display hindsight bias irrationally by determining whether they do so on the basis of good reasons or bad. Do they display hindsight bias in virtue of responding to higher- and lower-order evidence, or on the basis of a need for closure and self-esteem? We might try just asking them for their reasons, but we should be very sceptical about people&rsquo;s capacity to introspectively determine their own motivations. Perhaps some more complex experimental designs could help determine their motivations, or at least ensure that any bias they display couldn&rsquo;t be on the basis of the good reasons that I identify. There may be no in-principle reason this couldn&rsquo;t be done, but at the same time it&rsquo;s certainly no easy task.&nbsp;<br /><br /><br />What next?<br /><strong>If you want to hear more about the details</strong>, check out Brian's paper, "<a href="https://academic.oup.com/analysis/article-abstract/79/1/43/5032779" target="_blank">Hindsight Bias is Not a Bias</a>."<br /><strong>If you want to learn more about the empirical work</strong>&nbsp;<strong>on hindsight bias</strong>, see <a href="https://psycnet.apa.org/record/1976-00159-001" target="_blank">this seminal paper</a> by Baruch Fischhoff, as well as <a href="https://journals.sagepub.com/doi/abs/10.1177/1745691612454303?journalCode=ppsa" target="_blank">this recent overview</a> of the literature.<br /><strong>For more on the importance of normative models in psychological research</strong>, see <a href="https://www.ucl.ac.uk/lagnado-lab/publications/harris/Hahn_Harris_L&amp;M2014.pdf" target="_blank">this paper</a> by Ulrike Hahn and Adam Harris on motivated reasoning.</div>]]></content:encoded></item><item><title><![CDATA[A Glass Half Full? 
(Guest Post by Sarah Fisher)]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/a-glass-half-full]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/a-glass-half-full#comments]]></comments><pubDate>Sat, 11 Apr 2020 12:47:15 GMT</pubDate><category><![CDATA[Framing effects]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/a-glass-half-full</guid><description><![CDATA[(This is a guest post by Sarah Fisher. 2000 words; 8 minute read.)  We could all do with imagining ourselves into a different situation right now. For me, it would probably be a sunny caf&eacute;, with a coffee and a delicious pastry in front of me&ndash;&ndash;bliss. Here&rsquo;s another scenario that seems ever more improbable as time goes by (remember when we played and watched sports&hellip;?!):      Imagine that you are a recruiter for a college basketball team. Your job is to search for pr [...] ]]></description><content:encoded><![CDATA[<div class="paragraph"><font color="#a1a1a1">(This is a guest post by <a href="https://sites.google.com/view/sarahafisher/home" target="_blank">Sarah Fisher</a>. 2000 words; 8 minute read.)</font></div>  <div class="paragraph"><span>We could all do with imagining ourselves into a different situation right now. For me, it would probably be a sunny caf&eacute;, with a coffee and a delicious pastry in front of me&ndash;&ndash;bliss. Here&rsquo;s another scenario that seems ever more improbable as time goes by (remember when we played and watched sports&hellip;?!):</span></div>  <div>  <!--BLOG_SUMMARY_END--></div>  <blockquote><span>Imagine that you are a recruiter for a college basketball team. Your job is to search for promising high school basketball players and try to recruit them to your college. You are looking through files for players from local high schools, and you are especially interested in players who can score many points.<br />&#8203;</span><br /><span>The file you are currently looking at shows a player whose performance is quite unusual. This player made 40% of his shots last season.&nbsp;</span><span>How valuable do you think this player would be to your basketball team?</span></blockquote>  <div class="paragraph"><span>So, what do you think? Whereabouts would you rate the player on the scale below?</span><br /><span></span></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/20-4-11-fisher-framing-1_orig.png" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><span>In <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/bdm.2030"><span style="color:#0b4cb4">a recent psychology experiment</span></a>, this exact scenario was presented to a group of students at the University of California, San Diego. 
On average, they rated the player about here:</span></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/20-4-11-fisher-framing-2-1_orig.png" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><span>The experimenters gave the same scenario to another group of students, except this time they changed one detail. Instead of describing the player as having <em>made 40% </em>of his shots last season, they described him as having <em>missed 60%.&nbsp;<br />&#8203;</em></span><br /><span>Seems like no big deal. After all, missing 60% is exactly the same as making 40%, isn&rsquo;t it? These are just two ways of saying the same thing. Presumably, then, it didn&rsquo;t make a &nbsp;difference to how the students responded. Right?&nbsp;</span><br /><br /><span>In fact, the average rating dropped to about here:</span></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/20-4-11-fisher-framing-3-1_orig.png" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><span>Not a dramatic difference, but a big&nbsp;enough one that it's unlikely to have happened by chance. It looks as though changing the wording from &lsquo;made&rsquo; to &lsquo;missed&rsquo; genuinely affects how people judge the player.&nbsp;<br />&#8203;</span><br /><span>This is an example of a <strong>framing effect</strong>. (In particular, it is known as an &lsquo;attribute framing effect&rsquo;, according to the typology developed <a href="https://www.sciencedirect.com/science/article/abs/pii/S0749597898928047"><span style="color:#0b4cb4">here</span></a>.) Psychologists have been interested in framing effects since Amos Tversky and Daniel Kahneman first <a href="https://science.sciencemag.org/content/211/4481/453"><span style="color:#0b4cb4">brought them to light</span></a> about forty years ago. Many experiments conducted since then show that most of us are susceptible to such effects most of the time (see <a href="https://www.sciencedirect.com/science/article/abs/pii/S0749597898928047"><span style="color:#0b4cb4">here</span></a> for a survey of the first twenty years of framing research).</span><br /><br /><span>So, it turns out that framing the same information in different words can lead people to draw different conclusions. Huh.</span><br /><br /><span>Now, there&rsquo;s a sense in which this is not surprising at all. We&rsquo;re pretty well-attuned to these kinds of effects. In fact, you probably sensed straight away that switching from talking about shots <em>made</em> to shots <em>missed</em> would inevitably make the player sound worse. (As you can imagine, it&rsquo;s useful to know about this effect when you&rsquo;re trying to be persuasive &ndash; if you keep an eye out, you&rsquo;ll notice how advertisers and politicians, among others, deliberately frame their messages in particular ways).&nbsp;</span><br /><br /><span>But <em>why</em> do framing effects happen? How can it make a difference which words are used, if they convey exactly the same information? 
And, if framing effects seem obvious and familiar in some sense, why don&rsquo;t we see past them? Are we being <em>irrational</em>?&nbsp;</span><br /><br /><span>Many have taken framing effects to be a paradigmatic example of human irrationality. Kahneman has a chapter on them in his classic book about irrational decision-making, <a href="https://www.penguin.co.uk/books/563/56314/thinking--fast-and-slow/9780141033570.html"><span style="color:#0b4cb4"><em>Thinking, Fast and Slow</em></span></a>.</span><br /><br /><span>I&rsquo;m going to outline one way of challenging that view (and I hope to cover some others in a later blog post). On the alternative view, framing effects are a symptom of our sensitivity to subtle linguistic cues, and of our use of these cues to draw very reasonable conclusions.</span><br /><br /><span>Before getting to that, let&rsquo;s first consider why framing effects are so commonly thought to reveal irrationality. Think again about the case of the basketball player. Remember that he is judged more valuable when the focus is on the proportion of shots he <em>made</em> than when the focus is on the corresponding proportion he <em>missed</em>. It seems as though our judgements are not tracking how well or badly he actually performed&ndash;&ndash;the actual proportions he made and missed&ndash;&ndash;but are merely tracking language. We are swayed by the particular words used to convey the information. But those words, presumably, are merely superficial, surface phenomena, which make no difference to the underlying information they are used to convey. To invoke an analogy, it&rsquo;s a bit like deciding how much you like a gift purely on the basis of whether the wrapping paper is red or blue.&nbsp;</span><br /><br /><span>Going back to our earlier example, it seems clear, on reflection, that the basketball player&rsquo;s performance is the same whichever way it is described. Presumably, then, if we were perfectly rational, we should rate him similarly under each description.&nbsp;</span><br /><br /><span>This is why many psychologists have treated our susceptibility to framing effects as a kind of reasoning error. They assume that the differences in language are irrelevant. And, therefore, we are irrational if we allow these to affect our judgements. Summing up his chapter on &lsquo;Frames and Reality&rsquo;, Kahneman writes:</span></div>  <blockquote><span>As we have seen, again and again, an important choice is controlled by an utterly inconsequential feature of the situation. This is embarrassing &ndash; it is not how we would wish to make important decisions.</span><br /><span></span></blockquote>  <div class="paragraph">Why might we make such an error? Kahneman and others have suggested that positive or negative language (as when we hear that a basketball player &lsquo;made&rsquo; or &lsquo;missed&rsquo; shots) can colour our judgement. This is thought to happen purely <em>by association</em>, perhaps as an <em>emotional response</em> that bypasses our reasoning capacities.&nbsp;<br />&#8203;<br />This interpretation implies that we process information fairly shallowly, just responding to surface features (the particular words used) and not thinking through the underlying facts (what&rsquo;s actually being <em>said</em> with these words).&nbsp;<br /><br />Recently, though, a different view of framing effects has begun to emerge. Several theorists have argued that we are right to respond differently to alternative frames. 
The advocates of this view aim to provide a <em>rationalising </em>explanation of framing effects.&nbsp;<br /><br />The idea I&rsquo;ll focus on here is that frames &lsquo;leak&rsquo; information beyond what is explicitly conveyed &ndash; a proposal developed <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.176.349&amp;rep=rep1&amp;type=pdf"><span style="color:#0b4cb4">here</span></a>. The proponents agree that the basketball player made 40% of his shots and missed 60%, regardless of which frame is used. But, it is argued, by choosing to describe the player as having &lsquo;made 40%&rsquo;, some extra information is <em>implicitly </em>conveyed.&nbsp;<br /><br />What is this extra information?&nbsp;<br /><br />Well, think about how the frame affects your expectations about <em>typical</em> high school players. Do you expect them to make more or less than 40%? How does that change if I tell you the player &lsquo;<em>only </em>made 40%&rsquo;?&nbsp;<br /><br />According to the <strong>reference point hypothesis</strong>, describing the player as having &lsquo;made 40%&rsquo; conveys that the player made <em>relatively many </em>shots &ndash; compared, say, to the average high school player. On the flipside, saying that the player &lsquo;missed 60%&rsquo; of his shots conveys the <em>opposite </em>information &ndash; that the player <em>missed</em> relatively many shots.&nbsp;<br /><br />As you might have spotted, it&rsquo;s possible to switch these expectations with &lsquo;only&rsquo;. When we hear that the player &lsquo;<em>only</em> made 40%&rsquo; we expect that he should have made more. But when we hear that he &lsquo;<em>only</em> missed 60%&rsquo; it sounds like he&rsquo;s a relatively high scorer.<br /><br />Now, if this theory is right, it would explain why the player is judged to be better under the &lsquo;40% made&rsquo; frame than under the &lsquo;60% missed&rsquo; frame. Under the &lsquo;40% made&rsquo; frame, he is assumed to be a high performer, making more shots than average. In contrast, under the &lsquo;60% missed&rsquo; frame, the player is assumed to be a poor performer, making fewer shots than average.&nbsp;<br /><br />Importantly, it turns out that we&rsquo;re right to make these assumptions. The extra inferences we make are actually tracking how speakers choose their words (as shown by studies reported <a href="https://link.springer.com/article/10.3758/BF03196520"><span style="color:#0b4cb4">here</span></a>, <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.176.349&amp;rep=rep1&amp;type=pdf"><span style="color:#0b4cb4">here</span></a>, <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/bdm.502"><span style="color:#0b4cb4">here</span></a>, <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/bdm.2030"><span style="color:#0b4cb4">here</span></a>, and <a href="https://journals.sagepub.com/doi/10.1080/17470218.2016.1225779"><span style="color:#0b4cb4">here</span></a>). For example, when a basketball player made more shots than average, a speaker is more likely to talk about how many shots he &lsquo;made&rsquo; rather than how many he &lsquo;missed&rsquo;. In other words, the speaker&rsquo;s choice about whether to use &lsquo;made&rsquo; or &lsquo;missed&rsquo; is a pretty reliable guide to how well the player performed. And it&rsquo;s perfectly reasonable for our evaluation of the basketball player to be sensitive to this. On the &lsquo;information leakage&rsquo; account, then, the framing effect is perfectly rational. 
Going back to the gift-wrapping analogy, if people tend to wrap better gifts in red paper, then you <em>should </em>pay attention to the colour of the paper!<br /><br />It&rsquo;s worth emphasising that, according to the &lsquo;information leakage&rsquo; account, we fully process the information about how many shots the basketball player made and missed. It is just that we also make an extra inference about whether that represents relatively good or bad performance. In this way, framing effects are seen as evidence of <em>deeper</em>, not <em>shallower</em>, processing.&nbsp;<br /><br />Suppose this is the right analysis. That raises a question: what is the status of the additional information &lsquo;leaked&rsquo; by frames? Is it part of the meaning of our words, or something we work out using non-linguistic information?<br /><br />In <a href="https://drive.google.com/file/d/13OCS38T51Au2oV0EYjnsc8TXt4CFy8Bl/view"><span style="color:#0b4cb4">this draft paper</span></a>, I argue that the information should be captured under the broad umbrella of pragmatics, as &lsquo;conversational implicatures&rsquo;. These are the extra bits of information we glean by &lsquo;reading between the lines&rsquo; of what people explicitly say. The concept of an implicature was first introduced in a classic philosophy paper by Grice on &lsquo;Logic and Conversation&rsquo; (published in a collection of his work <a href="https://www.hup.harvard.edu/catalog.php?isbn=9780674852716"><span style="color:#0b4cb4">here</span></a>).<br /><br />Here&rsquo;s an example: imagine you ask me whether I read your article. I report to you: &lsquo;I did put the article in front of me and I moved my eyes from left to right along each line&rsquo;. You&rsquo;ll probably infer that I didn&rsquo;t actually read it &ndash; perhaps I was bored or distracted. But why do you draw that conclusion? After all, reading (in English) usually involves precisely the process I described.&nbsp;<br /><br />The answer is that I should simply have said &lsquo;yes&rsquo; if that&rsquo;s what I meant. By replying in such a convoluted way, I implicitly convey that, although I made some attempt to read the article, I didn&rsquo;t manage to do so. This information is known as an &lsquo;implicature&rsquo; of my utterance.<br /><br />I suggest that framing works a bit like this. If I tell you that a basketball player &lsquo;missed 60%&rsquo; of his shots, you&rsquo;ll probably infer that he&rsquo;s a poor performer. Why? Because I could have said he &lsquo;made 40%&rsquo; but I chose not to do so. Again, what I chose <em>not</em> to say speaks volumes.<br /><br />If my analysis is right, it suggests that a rational explanation of framing effects was always available; we just needed to appreciate Grice&rsquo;s insight that the information we communicate to each other can far exceed what we say explicitly. In particular, a speaker&rsquo;s choice of frame carries implicit information that justifies our sensitivity to what initially seems to be an entirely inconsequential feature of the utterance.<br /><br />If the case of the basketball player is anything to go by, we shouldn&rsquo;t automatically jump to the conclusion that framing effects are irrational. We have good reasons to judge the player differently under each frame. Having said that, the research on framing is complex and wide-ranging (sometimes overwhelmingly so!). Various other studies look at the framing of choices, questions, policies, and goals. 
Some of these investigate not just how information is phrased, but also the order in which it is presented, whether it is in spoken or written form, and all sorts of other possibilities. Different lines of research explore whether someone&rsquo;s susceptibility to framing effects depends on their age, gender, cognitive ability, or even their political affiliation. It remains an open question whether all the findings in this vast literature can be given a rational explanation. I don&rsquo;t know the answer to that. But, for now, I&rsquo;m keeping the glass half full!<br /><br /><br />What next?<br /><strong>If you want to read more about rational interpretations of framing effects</strong>, check out&nbsp;<a href="https://sites.google.com/view/sarahafisher/home" target="_blank">Sarah's website</a>,&nbsp;this <a href="http://datacolada.org/wp-content/uploads/2013/12/Mandel-2013.pdf" target="_blank">2013 paper</a> by David Mandel, and stay tuned for future posts.</div>]]></content:encoded></item><item><title><![CDATA[Pandemic Polarization is Reasonable]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/pandemic-polarization-is-reasonable]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/pandemic-polarization-is-reasonable#comments]]></comments><pubDate>Sat, 21 Mar 2020 10:50:22 GMT</pubDate><category><![CDATA[Polarization]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/pandemic-polarization-is-reasonable</guid><description><![CDATA[(2500 words; 11 minute read.)Last week I had back-to-back phone calls with two friends in the US. The first told me he wasn&rsquo;t too worried about Covid-19 because the flu already is a pandemic, and although this is worse, it&rsquo;s not that much worse. The second was&mdash;as he put it&mdash;at &ldquo;DEFCON 1&rdquo;: preparing for the possibility of a societal breakdown, and wondering whether he should buy a gun.I bet this sort of thing sounds familiar. People have had very different react [...] ]]></description><content:encoded><![CDATA[<div class="paragraph"><font color="#818181" size="2">(2500 words; 11 minute read.)</font><br /><br />Last week I had back-to-back phone calls with two friends in the US. The first told me he wasn&rsquo;t too worried about Covid-19 because the flu already <em>is</em> a pandemic, and although this is worse, it&rsquo;s not <em>that</em> much worse. The second was&mdash;as he put it&mdash;at &ldquo;DEFCON 1&rdquo;: preparing for the possibility of a societal breakdown, and wondering whether he should buy a gun.<br /><br />I bet this sort of thing sounds familiar. People have had <em>very</em> different reactions to the pandemic. Why? And equally importantly: what are we to make of such differences?<br /><br />The question is political. 
Though things are changing fast, there <a href="https://abcnews.go.com/Politics/coronavirus-upends-nation-americans-lives-changed-pandemic-poll/story?id=69696172">remains</a> a substantial partisan divide in people&rsquo;s reactions: for example, <a href="https://www.npr.org/2020/03/17/816501871/poll-as-coronavirus-spreads-fewer-americans-see-pandemic-as-a-real-threat?t=1584546778190">one poll</a> from early this week found that 76% of Democrats saw Covid as a &ldquo;real threat&rdquo;, compared to only 40% of Republicans (continuing <a href="https://fivethirtyeight.com/features/how-concerned-are-americans-about-coronavirus-so-far/">the previous week</a>&rsquo;<a href="https://fivethirtyeight.com/features/how-concerned-are-americans-about-coronavirus-so-far/">s trend</a>).<br /><br />What are we to make of this &ldquo;pandemic polarization&rdquo;? Must Democrats attribute partisan-motivated complacency to Republicans, or Republicans attribute partisan-motivated panic to Democrats?&nbsp;<br /><br />I&rsquo;m going to make the case that the answer is <em>no</em>: there is a simple, rational process that can drive these effects. Therefore we needn&rsquo;t&mdash;I&rsquo;d say <em>shouldn&rsquo;t</em>&mdash;take the differing reaction of the &ldquo;other side&rdquo; as <a href="https://www.kevindorst.com/stranger_apologies/plea_for_political_empathy">yet another reason to demonize them</a>. &nbsp;</div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div class="paragraph"><u style="color:rgb(42, 42, 42)"><strong>Motivated Reasoning?</strong>&nbsp;&nbsp;</u><br /><span style="color:rgb(42, 42, 42)">In order to know how to assess these differing reactions, we need to have a sense of their causes. Of course, we won&rsquo;t be able to work out the&nbsp;</span><em style="color:rgb(42, 42, 42)">whole&nbsp;</em><span style="color:rgb(42, 42, 42)">story&mdash;the process is far too complicated for that&mdash;but we can highlight some potential driving factors.</span><br /><br /><span style="color:rgb(42, 42, 42)">So what might have led to the partisan divide about Covid? A natural first reaction is to appeal to &ldquo;</span><a href="https://en.wikipedia.org/wiki/Motivated_reasoning">motivated reasoning</a><span style="color:rgb(42, 42, 42)">&rdquo;&mdash;the (</span><a href="https://www.ucl.ac.uk/lagnado-lab/publications/harris/Hahn_Harris_L&amp;M2014.pdf">putative</a><span style="color:rgb(42, 42, 42)">) tendency people have to reason their way to their preferred conclusions. Maybe Republicans are (relatively) unconcerned because they&nbsp;</span><em style="color:rgb(42, 42, 42)">want</em><span style="color:rgb(42, 42, 42)">&nbsp;the situation to be okay for the status quo, whereas Democrats are concerned because they want it not to be.</span><br /><br /><span style="color:rgb(42, 42, 42)">This is one of those explanations that sounds good from 10,000 feet, but loses steam as soon as you try to apply it consistently. Ask yourself: why do&nbsp;</span><em style="color:rgb(42, 42, 42)">you</em><span style="color:rgb(42, 42, 42)">&nbsp;feel the way you do about Covid? I suspect the answer will be: &ldquo;Because I&rsquo;ve followed the facts, listened to experts I trust, and discussed it with my friends and peers.
In light of that, my attitude seems reasonable.&rdquo;&nbsp; And I&rsquo;m willing to bet the answer will&nbsp;</span><em style="color:rgb(42, 42, 42)">not</em><span style="color:rgb(42, 42, 42)">&nbsp;be: &ldquo;Because I wanted this to be bad (good) news for the status quo, so I convinced myself that it was.&rdquo;</span><br /><br /><span style="color:rgb(42, 42, 42)">Okay, so&nbsp;</span><em style="color:rgb(42, 42, 42)">your</em><span style="color:rgb(42, 42, 42)">&nbsp;opinion wasn&rsquo;t driven by motivated reasoning. What about others&rsquo; opinions&mdash;in particular, those who&nbsp;</span><em style="color:rgb(42, 42, 42)">agree</em><span style="color:rgb(42, 42, 42)">&nbsp;with you about Covid?&nbsp; Why do you think&nbsp;</span><em style="color:rgb(42, 42, 42)">they</em><span style="color:rgb(42, 42, 42)">&nbsp;have their beliefs? I suspect your answer will be that they too formed a reasonable opinion in light of their evidence. (If your answer were &ldquo;motivated reasoning&rdquo;, then you should be worried that the same reasoning explains how you reached the same conclusion!)</span><br /><br /><span style="color:rgb(42, 42, 42)">At this point you&rsquo;ve concluded that motivated reasoning isn&rsquo;t the (primary) explanation of the attitudes of the people who&nbsp;</span><em style="color:rgb(42, 42, 42)">agree&nbsp;</em><span style="color:rgb(42, 42, 42)">with you about Covid. And that means that if you still think that motivated reasoning caused the pandemic polarization, you have to think it was the&nbsp;</span><em style="color:rgb(42, 42, 42)">other side</em><span style="color:rgb(42, 42, 42)">&rsquo;s motivated reasoning.</span><br /><br /><span style="color:rgb(42, 42, 42)">But if you say that, you&rsquo;ve moved from well-grounded psychology to groundless partisan bashing. For as&nbsp;</span><a href="https://www.ucl.ac.uk/lagnado-lab/publications/harris/Hahn_Harris_L&amp;M2014.pdf">controversial</a><span style="color:rgb(42, 42, 42)">&nbsp;as&nbsp;</span><a href="https://slate.com/health-and-science/2018/01/weve-been-told-were-living-in-a-post-truth-age-dont-believe-it.html">motivated reasoning is</a><span style="color:rgb(42, 42, 42)">, the one thing all sides agree on is that insofar as people are disposed to do it,&nbsp;</span><a href="https://www.theatlantic.com/magazine/archive/2011/12/i-was-wrong-and-so-are-you/308713/">we are all&nbsp;<em>equally</em>&nbsp;disposed to do it</a><span style="color:rgb(42, 42, 42)">.</span><br /><br /><span style="color:rgb(42, 42, 42)">Upshot: if you&rsquo;re convinced that motivated reasoning is the explanation for pandemic polarization, then you must conclude that it applies equally to everyone&mdash;including yourself. 
But you can&rsquo;t reasonably think, for example, &ldquo;Covid is a serious crisis&mdash;but my belief in that is based on irrational motivated reasoning.&rdquo;&nbsp; (That is what philosophers call an &ldquo;</span><a href="https://plato.stanford.edu/entries/epistemic-self-doubt/">akratic</a><span style="color:rgb(42, 42, 42)">&rdquo; state.)&nbsp;</span><br /><br /><span style="color:rgb(42, 42, 42)">Therefore, if you think motivated reasoning explains the divide in opinions about Covid, you must think that it also explains&nbsp;</span><em style="color:rgb(42, 42, 42)">your</em><span style="color:rgb(42, 42, 42)">&nbsp;controversial opinions about Covid&mdash;which in turn means that you must give up those opinions.</span><br /><br /><span style="color:rgb(42, 42, 42)">Of course, you&rsquo;re not going to do that&mdash;and I&rsquo;m not arguing that you should. What I&rsquo;m arguing is that if you hold onto your opinions, you can&rsquo;t explain pandemic polarization by appeal to motivated reasoning&mdash;you need a different story.</span></div>  <div class="paragraph" style="text-align:right;"><font color="#818181" size="2">(7 minutes left.)</font></div>  <div class="paragraph"><u style="color:rgb(42, 42, 42)"><strong>Group Polarization</strong></u><strong style="color:rgb(42, 42, 42)">&nbsp;</strong><br /><span style="color:rgb(42, 42, 42)">Here&rsquo;s how that story could go.</span><br /><br /><span style="color:rgb(42, 42, 42)">A hint comes from my two friends. It turns out they share a lot: political views, socioeconomic status, gender, race, hometown&mdash;you name it. They get their news from similar sources, and they both knew the same &ldquo;hard facts&rdquo; about Covid. Yet their reactions to it could not have been more different.</span><br /><br /><span style="color:rgb(42, 42, 42)">Why? Well, consider some of those hard facts. The closest familiar comparison&nbsp; to Covid is the seasonal flu, so how do the two stack up?&nbsp;</span><br /><br /><span style="color:rgb(42, 42, 42)">Covid&rsquo;s reproductive number (the number of people who a single person infects, absent social distancing) is&nbsp;</span><a href="https://www.ncbi.nlm.nih.gov/pubmed/32097725">around 2.3</a><span style="color:rgb(42, 42, 42)">, while the&nbsp;</span><a href="https://share.upmc.com/2020/03/coronavirus-vs-flu/?emb=CTA4_coronavirus-vs-flu_4&amp;et_cid=768513&amp;et_rid=1294393&amp;utm_medium=Email&amp;utm_source=ExactTarget&amp;utm_campaign=covid19&amp;utm_content=UPMC-covid19-database-email-1&amp;em_id=tr_UPMC-covid19-database-email-1_Mar-20_e1">flu</a><span style="color:rgb(42, 42, 42)">&rsquo;</span><a href="https://share.upmc.com/2020/03/coronavirus-vs-flu/?emb=CTA4_coronavirus-vs-flu_4&amp;et_cid=768513&amp;et_rid=1294393&amp;utm_medium=Email&amp;utm_source=ExactTarget&amp;utm_campaign=covid19&amp;utm_content=UPMC-covid19-database-email-1&amp;em_id=tr_UPMC-covid19-database-email-1_Mar-20_e1">s is 1.3</a><span style="color:rgb(42, 42, 42)">. 
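</span><br /><br /><span style="color:rgb(42, 42, 42)">(An aside: to get a rough feel for the gap between those two numbers, here is a toy compounding calculation. It ignores immunity, behavior change, seasonality, and everything else an epidemiologist would insist on; it is meant to convey scale, not to forecast anything.)</span><br /><pre><code>
# Purely illustrative compounding of reproductive numbers (Python).
# No immunity, no distancing, no seasonality -- a feel for the numbers,
# not an epidemic forecast.
for r0 in (1.3, 2.3):
    cases = 100.0  # hypothetical initial infections
    for _ in range(10):  # ten generations of transmission
        cases *= r0
    print(f"R0 = {r0}: about {cases:,.0f} cases after 10 generations")

# R0 = 1.3 gives about 1,379 cases; R0 = 2.3 gives about 414,265 --
# roughly a three-hundred-fold gap from a one-point difference in R0.
</code></pre><br /><span style="color:rgb(42, 42, 42)">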
Covid has a mortality rate between 0.5% and 5% (or&nbsp;</span><a href="https://www.nytimes.com/2020/03/19/health/wuhan-coronavirus-deaths.html?te=1&amp;nl=coronavirus-briefing&amp;emc=edit_cb_20200319&amp;campaign_id=154&amp;instance_id=16915&amp;segment_id=22402&amp;user_id=00377c2d3b9e184a79f025647f3b943b&amp;regi_id=84326204">maybe 1.4%</a><span style="color:rgb(42, 42, 42)">)&nbsp;</span><a href="https://medium.com/@tomaspueyo/coronavirus-act-today-or-people-will-die-f4d3d9cd99ca">depending on how overloaded the healthcare system becomes</a><span style="color:rgb(42, 42, 42)">, while the flu&rsquo;s mortality rate (in a&nbsp;</span><em style="color:rgb(42, 42, 42)">non</em><span style="color:rgb(42, 42, 42)">-overloaded system) is&nbsp;</span><a href="https://share.upmc.com/2020/03/coronavirus-vs-flu/?emb=CTA4_coronavirus-vs-flu_4&amp;et_cid=768513&amp;et_rid=1294393&amp;utm_medium=Email&amp;utm_source=ExactTarget&amp;utm_campaign=covid19&amp;utm_content=UPMC-covid19-database-email-1&amp;em_id=tr_UPMC-covid19-database-email-1_Mar-20_e1">around 0.1%</a><span style="color:rgb(42, 42, 42)">. Covid has (as of this writing)&nbsp;</span><a href="https://www.cdc.gov/coronavirus/2019-ncov/cases-updates/cases-in-us.html">around 15,000</a><span style="color:rgb(42, 42, 42)">&nbsp;confirmed cases in the U.S. (with 201 deaths), and since&mdash;especially early on&mdash;confirmed cases&nbsp;</span><a href="https://medium.com/@tomaspueyo/coronavirus-act-today-or-people-will-die-f4d3d9cd99ca">probably lag behind true cases by an order of magnitude</a><span style="color:rgb(42, 42, 42)">, that means there may now be around 150,000 cases.&nbsp;</span><a href="https://www.nytimes.com/2020/03/16/us/coronavirus-hype-overreaction-social-distancing.html?te=1&amp;nl=coronavirus-briefing&amp;emc=edit_cb_20200316&amp;campaign_id=154&amp;instance_id=16821&amp;segment_id=22293&amp;user_id=00377c2d3b9e184a79f025647f3b943b&amp;regi_id=84326204">Meanwhile</a><span style="color:rgb(42, 42, 42)">, the flu has infected as many as 45,000,000 Americans this season, and killed as many as 40,000. Since Covid has a much higher reproductive number, it has the potential to be much worse. But the flu also had an entire winter season to infect us, while Covid is hitting in the spring&mdash;and there&rsquo;s&nbsp;</span><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3551767">some (inconclusive) evidence</a><span style="color:rgb(42, 42, 42)">&nbsp;that it&rsquo;ll spread less well in the summer.&nbsp; But on the other hand, unlike the flu, we won&rsquo;t have a vaccine for Covid for&nbsp;</span><a href="https://www.nytimes.com/2020/03/16/health/coronavirus-vaccine.html?te=1&amp;nl=coronavirus-briefing&amp;emc=edit_cb_20200317&amp;campaign_id=154&amp;instance_id=16859&amp;segment_id=22332&amp;user_id=00377c2d3b9e184a79f025647f3b943b&amp;regi_id=84326204">at least a year</a><span style="color:rgb(42, 42, 42)">&nbsp;(</span><a href="https://www.technologyreview.com/s/615331/a-coronavirus-vaccine-will-take-at-least-18-monthsif-it-works-at-all/">or more</a><span style="color:rgb(42, 42, 42)">). 
But, on the&nbsp;</span><em style="color:rgb(42, 42, 42)">other&nbsp;</em><span style="color:rgb(42, 42, 42)">other hand, there is an&nbsp;</span><a href="https://www.economist.com/briefing/2020/03/12/understanding-sars-cov-2-and-the-drugs-that-might-lessen-its-power">unprecedented scientific focus</a><span style="color:rgb(42, 42, 42)">&nbsp;on solving this problem, with&nbsp;</span><a href="https://www.nytimes.com/2020/03/17/science/coronavirus-treatment.html?te=1&amp;nl=coronavirus-briefing&amp;emc=edit_cb_20200317&amp;campaign_id=154&amp;instance_id=16859&amp;segment_id=22332&amp;user_id=00377c2d3b9e184a79f025647f3b943b&amp;regi_id=84326204">dozens of drugs</a><span style="color:rgb(42, 42, 42)">&nbsp;in development&mdash;with some claiming to have found a potential &ldquo;</span><a href="https://www.thechronicle.com.au/news/cure-found-for-coronavirus-in-australia/3973564/">cure</a><span style="color:rgb(42, 42, 42)">&rdquo;.</span><br /><br /><span style="color:rgb(42, 42, 42)">But, but, but&hellip;.</span><br /><br /><span style="color:rgb(42, 42, 42)">You see the problem? There are&nbsp;</span><em style="color:rgb(42, 42, 42)">too many</em><span style="color:rgb(42, 42, 42)">&nbsp;hard facts about Covid! We are awash in information that we don&rsquo;t know how to interpret. How much worse is a reproductive number of 2.3 than 1.3? How reliable are those mortality numbers? How will the virus&rsquo;s spread be affected by social distancing measures?&nbsp; Even experts can&rsquo;t know, so the rest of us are surely in the dark.</span><br /><br /><span style="color:rgb(42, 42, 42)">When we&rsquo;re awash with hard-to-interpret information like this, what do we do?&nbsp; Phone a friend. We&nbsp;</span><em style="color:rgb(42, 42, 42)">talk&nbsp;</em><span style="color:rgb(42, 42, 42)">to people about it&mdash;friends, family, colleagues.&nbsp; We ask for their reactions; we share our own; we consider our differences and try to reconcile them. Covid calibration has become the warp and woof of everyday conversation.</span><br /><br /><span style="color:rgb(42, 42, 42)">In situations like this, the majority of our&nbsp;</span><em style="color:rgb(42, 42, 42)">evidence</em><span style="color:rgb(42, 42, 42)">&mdash;the information that determines what we each should think&mdash;comes not from the hard facts themselves, but from others&rsquo; reactions to them. At a first pass: if everyone you trust says that Covid&rsquo;s reproductive number of 2.3 is not a big deal, you should think it&rsquo;s not a big deal; if they all say it is terrifying, you should think it&rsquo;s terrifying.</span><br /><br /><span style="color:rgb(42, 42, 42)">And&nbsp;</span><em style="color:rgb(42, 42, 42)">that</em><span style="color:rgb(42, 42, 42)">, of course, was the difference between my two friends. One had a social circle that was (at the time) largely nonchalant about the news; the other had one that was stirred up into a panic.&nbsp;</span><br /><br /><span style="color:rgb(42, 42, 42)">What I&rsquo;m suggesting is that&nbsp;</span><em style="color:rgb(42, 42, 42)">this</em><span style="color:rgb(42, 42, 42)">&mdash;differing initial reactions of our social circles&mdash;is the driver of pandemic polarization.</span><br /><br /><span style="color:rgb(42, 42, 42)">The core process is a well-understood one with a long history in social psychology.
It&rsquo;s known as the &ldquo;</span><a href="https://en.wikipedia.org/wiki/Group_polarization">group polarization effect</a><span style="color:rgb(42, 42, 42)">&rdquo;: when people discuss their similar opinions, they have a tendency to become&nbsp;</span><em style="color:rgb(42, 42, 42)">more uniform</em><span style="color:rgb(42, 42, 42)">&nbsp;and&nbsp;</span><em style="color:rgb(42, 42, 42)">more extreme</em><span style="color:rgb(42, 42, 42)">&nbsp;in those opinions.</span><br /><br /><span style="color:rgb(42, 42, 42)">The phenomenon has been studied for decades, and has been repeatedly confirmed on a wide range of subject matters&mdash;ranging from emotionally charged issues like&nbsp;</span><a href="https://science.sciencemag.org/content/169/3947/778">racial attitudes</a><span style="color:rgb(42, 42, 42)">&nbsp;or&nbsp;</span><a href="https://www.tandfonline.com/doi/abs/10.1080/08913811.2010.508634">political views</a><span style="color:rgb(42, 42, 42)">, to mundane issues like whether a hypothetical person should take a&nbsp;</span><a href="https://psycnet.apa.org/record/1976-26005-001">risky or safe option</a><span style="color:rgb(42, 42, 42)">, or even&nbsp;</span><a href="https://www.sciencedirect.com/science/article/abs/pii/S0022103196900244">how comfortable a dental chair is</a><span style="color:rgb(42, 42, 42)">.</span><br /><br /><span style="color:rgb(42, 42, 42)">In all these cases, people in a discussion group start out varied in their opinions, but predominantly leaning in one direction. When they discuss, their opinions tend to cluster together (become more uniform) and move in the&nbsp; direction of the group&rsquo;s initial inclination (become more extreme). &nbsp;</span><br /><br /><span style="color:rgb(42, 42, 42)">For example, consider this question: Is Covid a threat to societal stability? My friends clearly disagreed about this, as did their social circles. Let&rsquo;s represent people&rsquo;s opinions on the issue as dots on a line, with the right end of the line being &ldquo;definitely a threat&rdquo; and the left end being &ldquo;definitely not a threat&rdquo;. Let&rsquo;s call my relaxed friend&rsquo;s social circle "the Reds", and my panicked friend&rsquo;s social circle "the Blues". Initially, through randomness or some other mechanism (more on that in a moment), Blues start out on average more concerned than Reds, but with wide variation:</span></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/20-3-21-polarization1_orig.jpeg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph" style="text-align:justify;">Reds talk to Reds; Blues talk to Blues. What happens? The two groups pull apart:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/20-3-21-polarization2_orig.jpeg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph">That&rsquo;s the group polarization effect. 
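<br /><br />Here&rsquo;s a minimal simulation of how pooling alone can generate this picture. It&rsquo;s a toy model of my own with made-up parameters, anticipating the rational mechanisms discussed below: each agent privately sees a handful of &lsquo;arguments&rsquo;, and discussion simply pools the whole group&rsquo;s arguments.<br /><pre><code>
import numpy as np

# Toy model: group polarization from rational pooling of private 'arguments'.
# All parameters are made up for illustration.
rng = np.random.default_rng(0)

def simulate_group(q, n_agents=20, n_args=6, prior_strength=2.0):
    """Each agent privately sees n_args arguments; each argument favors
    'Covid is a threat' with probability q (the group's informational lean).
    Opinions are Beta-posterior means, shrunk toward the neutral point 0.5."""
    pro = rng.binomial(n_args, q, size=n_agents)
    private = (prior_strength + pro) / (2 * prior_strength + n_args)
    # Discussion: everyone now conditions on the whole group's pooled arguments.
    pooled = (prior_strength + pro.sum()) / (2 * prior_strength + n_args * n_agents)
    return private, np.full(n_agents, pooled)

for name, q in [("Reds", 0.42), ("Blues", 0.62)]:
    private, shared = simulate_group(q)
    print(f"{name}: mean {private.mean():.2f} -> {shared.mean():.2f}, "
          f"spread {private.std():.2f} -> {shared.std():.2f}")

# After pooling, each group is perfectly uniform (spread collapses to 0) and
# typically more extreme: more shared evidence means less shrinkage toward
# the neutral prior, so opinions land further out in the group's direction.
</code></pre>This toy implements only the &lsquo;information sharing&rsquo; mechanism discussed below; no social pressure is involved, yet the group polarization effect still emerges.<br /><br />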
It&rsquo;s <a href="https://www.sciencedirect.com/science/article/pii/S0065260108600945">especially strong</a> when there is substantial variation and uncertainty in people&rsquo;s initial opinions, as we&rsquo;ve seen there will be with Covid. Thus small variations in the initial reactions of social circles will ramify, leading to substantial group-level differences.<br /><br />&#8203;When we add to this the well-documented <a href="https://www.people-press.org/2014/06/12/political-polarization-in-the-american-public/">social divides between Democrats and Republicans</a>, it&rsquo;s no surprise that systematic differences would emerge between the groups. Moreover the <em>direction</em> of those divergences (with Republicans less concerned than Democrats) is no surprise either, given that the news sources frequented by the two parties had differing initial reactions to Covid.<br /><br /><strong>In short:</strong> Given partisan social separation and some initial nudges from the news, the complexity of Covid-information combined with the group polarization effect to sweep the parties apart. That is my story for the prime driver of pandemic polarization.&nbsp;<br /><br />Suppose we accept it. Then we face an evaluative question:<strong> What are we to make of it?</strong> Should we think of the members of each party as rational or irrational? And&mdash;closely related&mdash;when people disagree with us about the seriousness of Covid, should we <em>blame</em> them for doing so? &nbsp;<br /><br />I think we should not.&nbsp;<br /><br />(To be clear, I&rsquo;m thinking of ordinary party members&mdash;rather than political leaders&mdash;since leaders&rsquo; reactions are subject to very different norms and forces.)</div>  <div class="paragraph" style="text-align:right;"><font color="#818181" size="2">(3 minutes left.)</font></div>  <div class="paragraph"><u style="color:rgb(42, 42, 42)"><strong>Opinion Pooling</strong></u><span style="color:rgb(42, 42, 42)">&nbsp;&nbsp;</span><br /><span style="color:rgb(42, 42, 42)">The case for this is relatively simple: the widely-accepted explanations for group polarization are&nbsp;</span><em style="color:rgb(42, 42, 42)">rational</em><span style="color:rgb(42, 42, 42)">&nbsp;explanations&mdash;people are responding as they should given the evidence from their peers&rsquo; opinions.&nbsp;</span><br /><br /><span style="color:rgb(42, 42, 42)">This claim will be half-controversial. It is widely accepted that there are two drivers of group polarization: (1) information sharing, and (2) social influences (e.g.&nbsp;</span><a href="https://psycnet.apa.org/record/1986-24477-001">Isenberg 1986</a><span style="color:rgb(42, 42, 42)">;&nbsp;</span><a href="https://www.yalelawjournal.org/pdf/449_3p1xtbdh.pdf">Sunstein 2000</a><span style="color:rgb(42, 42, 42)">,&nbsp;</span><a href="https://global.oup.com/academic/product/going-to-extremes-9780195378016?cc=gb&amp;lang=en&amp;">2009</a><span style="color:rgb(42, 42, 42)">;&nbsp;</span><a href="https://www.amazon.co.uk/Social-Psychology-David-Myers/dp/0078035295">Myers 2012</a><span style="color:rgb(42, 42, 42)">).</span><br /><br /><span style="color:rgb(42, 42, 42)">(1) Information sharing is straightforward. When people discuss, they share knowledge, information, and arguments. If the majority of the group leans one way on an issue, then that means they will collectively hold more bits of information favoring that direction than disfavoring it. 
Thus when that information is pooled, it will push the group-members&rsquo; opinions in that direction.</span><br /><br /><span style="color:rgb(42, 42, 42)">This is clearly a rational mechanism. Suppose you and I have each been checking the news and each have a long list of reasons to be worried about Covid. If we start talking to each other and each find out that the other person&nbsp;</span><em style="color:rgb(42, 42, 42)">also</em><span style="color:rgb(42, 42, 42)">&nbsp;has such a list, clearly we should each become more worried!</span><br /><br /><span style="color:rgb(42, 42, 42)">(2) Social influences are more subtle&mdash;here is where my claim of rationality will be controversial. The standard story is called &ldquo;</span><a href="https://en.wikipedia.org/wiki/Social_comparison_theory">social comparison theory</a><span style="color:rgb(42, 42, 42)">&rdquo;, which says that when people see what others&rsquo; opinions are on an issue, they adjust their own positions in order to hold an opinion that (they think) is favored by the group (</span><a href="https://psycnet.apa.org/record/1976-26005-001">Myers and Lamm 1978</a><span style="color:rgb(42, 42, 42)">,&nbsp;</span><a href="https://www.amazon.co.uk/Social-Psychology-David-Myers/dp/0078035295">Myers 2012</a><span style="color:rgb(42, 42, 42)">,&nbsp;</span><a href="https://psycnet.apa.org/record/1986-24477-001">Isenberg 1986</a><span style="color:rgb(42, 42, 42)">). The idea is that if you see people worrying about Covid, then even if you don&rsquo;t take there to be good reason to be too worried, you will act as if you do in order to maintain your status within the group.&nbsp;</span><br /><br /><span style="color:rgb(42, 42, 42)">If that description sounds a bit off, I think that&rsquo;s because it should. It&rsquo;s a strange person who consciously adopts a more extreme position about Covid than the one they think is warranted. Much more relatable is the person who sees that their friends are worried, and gets worried too!</span><br /><br /><span style="color:rgb(42, 42, 42)">So why should we buy the &ldquo;social comparison&rdquo; explanation? The primary evidence for it is the following (</span><a href="https://psycnet.apa.org/record/1976-26005-001">Myers and Lamm 1978</a><span style="color:rgb(42, 42, 42)">,&nbsp;</span><a href="https://www.amazon.co.uk/Social-Psychology-David-Myers/dp/0078035295">Myers 2012</a><span style="color:rgb(42, 42, 42)">,&nbsp;</span><a href="https://psycnet.apa.org/record/1986-24477-001">Isenberg 1986</a><span style="color:rgb(42, 42, 42)">): when people are exposed to others&rsquo; opinions&nbsp;</span><em style="color:rgb(42, 42, 42)">without</em><span style="color:rgb(42, 42, 42)">&nbsp;any chance to discuss them, they nonetheless move their opinions in the direction of others&rsquo;.&nbsp; So&mdash;the thought goes&mdash;since they aren&rsquo;t getting any new&nbsp;</span><em style="color:rgb(42, 42, 42)">arguments</em><span style="color:rgb(42, 42, 42)">&nbsp;or&nbsp;</span><em style="color:rgb(42, 42, 42)">information</em><span style="color:rgb(42, 42, 42)">, they must be changing their opinions for non-informational reasons.</span><br /><br /><span style="color:rgb(42, 42, 42)">To my ears&mdash;and, I suspect, to those of many epistemologists&mdash;this is a strange claim.
It is a mistake to assume that if all you&rsquo;re learning is other people&rsquo;s opinions about Covid, then you are thereby not getting substantial&nbsp;</span><em style="color:rgb(42, 42, 42)">information</em><span style="color:rgb(42, 42, 42)">&nbsp;about Covid. In fact, there&rsquo;s an&nbsp;</span><a href="https://plato.stanford.edu/entries/disagreement/">entire literature</a><span style="color:rgb(42, 42, 42)">&nbsp;in epistemology that&rsquo;s devoted to exactly these &ldquo;peer (dis)agreement&rdquo; informational effects. If I learn that you think that Covid is a threat to societal stability, that gives me reason to think that&nbsp;</span><em style="color:rgb(42, 42, 42)">you</em><span style="color:rgb(42, 42, 42)">&nbsp;have reason to think that Covid is a threat to societal stability&mdash;and, therefore, it gives&nbsp;</span><em style="color:rgb(42, 42, 42)">me</em><span style="color:rgb(42, 42, 42)">&nbsp;reason to think it&rsquo;s a threat too. Moreover, this is&nbsp;</span><em style="color:rgb(42, 42, 42)">especially</em><span style="color:rgb(42, 42, 42)">&nbsp;true when the relevant evidence is so noisy and ambiguous that I should be quite unsure how to react to it.&nbsp;</span><br /><br /><span style="color:rgb(42, 42, 42)">&#8203;Thus once we note that our friends&rsquo; opinions carry massive amounts of information in situations like this, we should expect such &ldquo;social comparison&rdquo; effects to play a small role compared with the informational effects in explaining group polarization.</span><br /><br /><u style="color:rgb(42, 42, 42)"><strong>Conclusion</strong></u><br /><span style="color:rgb(42, 42, 42)">We shouldn&rsquo;t take pandemic polarization as an indicator of motivated reasoning or irrationality. Once we realize just how hard it is to interpret the hard facts, how limited and separated our trusted social circles are, and (thus) how much of our information comes from our peers&rsquo; reactions, we should expect these effects from reasonable people.</span><br /><br /><span style="color:rgb(42, 42, 42)">If this is right, it means that we should&nbsp;</span><em style="color:rgb(42, 42, 42)">not</em><span style="color:rgb(42, 42, 42)">&nbsp;take pandemic polarization as yet another reason to hurl &ldquo;irrational&rdquo;, &ldquo;biased&rdquo;, and other terms of abuse at our political opponents. &nbsp;</span><br /><br /><span style="color:rgb(42, 42, 42)">What&nbsp;</span><em style="color:rgb(42, 42, 42)">should</em><span style="color:rgb(42, 42, 42)">&nbsp;we hurl at them, then? &nbsp;</span><br /><br /><span style="color:rgb(42, 42, 42)">Maybe&mdash;for once&mdash;nothing.
Maybe we should take this pandemic as a chance to remember that we face bigger threats than our political opponents, that our deep differences pale in comparison to our shared susceptibilities, and that we can do so much more when we work&nbsp;</span><em style="color:rgb(42, 42, 42)">with</em><span style="color:rgb(42, 42, 42)">&nbsp;the "other side&rdquo; than when we fight against them.</span><br /><br /><br /><span style="color:rgb(42, 42, 42)">What next?</span><br /><strong style="color:rgb(42, 42, 42)">For more on political unity in this moment</strong><span style="color:rgb(42, 42, 42)">, see&nbsp;</span><a href="https://www.nytimes.com/2020/03/18/podcasts/the-daily/cuomo-new-york-coronavirus.html?action=click&amp;module=audio-series-bar&amp;region=header&amp;pgtype=Article">this interview</a><span style="color:rgb(42, 42, 42)">&nbsp;with New York Governor Andrew Cuomo (starting at 20:30). Also check out&nbsp;</span><a href="https://nl.nytimes.com/f/newsletter/ocBKwmTxxohqABkSwLHbXg~~/AAAAAQA~/RgRgVne3P0TuaHR0cHM6Ly93d3cubnl0aW1lcy5jb20vMjAyMC8wMy8xOS93ZWxsL2ZhbWlseS9ob3ctY2FuLXdlLWhlbHAtb25lLWFub3RoZXIuaHRtbD90ZT0xJm5sPWNvcm9uYXZpcnVzLWJyaWVmaW5nJmVtYz1lZGl0X2NiXzIwMjAwMzE5JmNhbXBhaWduX2lkPTE1NCZpbnN0YW5jZV9pZD0xNjkxNSZzZWdtZW50X2lkPTIyNDAyJnVzZXJfaWQ9MDAzNzdjMmQzYjllMTg0YTc5ZjAyNTY0N2YzYjk0M2ImcmVnaV9pZD04NDMyNjIwNFcDbnl0QgoAJbfyc14ND-jDUhVrZXZpbmRvcnN0MEBnbWFpbC5jb21YBAAAAAA~">this article</a><span style="color:rgb(42, 42, 42)">&nbsp;on how we can band together.</span><br /><strong style="color:rgb(42, 42, 42)">For more on the rationality of polarization</strong><span style="color:rgb(42, 42, 42)">, see&nbsp;</span><a href="https://phenomenalworld.org/analysis/why-rational-people-polarize">this piece</a><span style="color:rgb(42, 42, 42)">&nbsp;in the&nbsp;</span><em style="color:rgb(42, 42, 42)">Phenomenal World</em><span style="color:rgb(42, 42, 42)">, as well as articles like&nbsp;</span><a href="http://imperfectcognitions.blogspot.com/2019/03/the-misinformation-age-how-false.html?utm_source=feedburner&amp;utm_medium=twitter&amp;utm_campaign=Feed%3A+ImperfectCognitions+%28Imperfect+Cognitions%29">this one</a><span style="color:rgb(42, 42, 42)">&nbsp;by Cailin O&rsquo;Connor and&nbsp;</span><a href="https://penntoday.upenn.edu/news/polarization-can-happen-even-when-rational-people-listen-each-other">this one</a><span style="color:rgb(42, 42, 42)">&nbsp;on the work of Daniel Singer and others.</span><br /><strong style="color:rgb(42, 42, 42)">For more updates on related topics</strong><span style="color:rgb(42, 42, 42)">,&nbsp;</span><a href="https://twitter.com/kevin_dorst">follow me on Twitter</a><span style="color:rgb(42, 42, 42)">&nbsp;or&nbsp;</span><a href="https://mailchi.mp/279517050568/stranger_apologies_signup">subscribe to the newsletter</a><span style="color:rgb(42, 42, 42)">.</span></div>]]></content:encoded></item><item><title><![CDATA[The Rational Question]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/the-rational-question]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/the-rational-question#comments]]></comments><pubDate>Sat, 14 Mar 2020 14:25:34 GMT</pubDate><category><![CDATA[Uncategorized]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/the-rational-question</guid><description><![CDATA[I just published a new&nbsp;piece in the Oxonian Review.&nbsp;It argues that a general problem with claimed demonstrations of irrationality is their reliance on standard economic models of 
rational belief and action, and illustrates the point by explaining some great work by Tom Kelly on the sunk cost fallacy and by Brian Hedden on hindsight bias. Check out the full article here. [...] ]]></description><content:encoded><![CDATA[<div class="paragraph">I just published a <a href="http://www.oxonianreview.org/wp/the-rational-question/" target="_blank">new&nbsp;piece in the <em>Oxonian Review</em></a>.&nbsp;It argues that a general problem with claimed demonstrations of irrationality is their reliance on standard economic models of rational belief and action, and illustrates the point by explaining some great work by <a href="https://philpapers.org/rec/KELSCR" target="_blank">Tom Kelly on the sunk cost fallacy</a> and by <a href="https://philpapers.org/rec/HEDHBI" target="_blank">Brian Hedden on hindsight bias</a>.<br /><br />Check out the <a href="http://www.oxonianreview.org/wp/the-rational-question/" target="_blank">full article here</a>.</div>]]></content:encoded></item><item><title><![CDATA[Why Rationalize? Look and See.]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/why-rationalize-look-and-see]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/why-rationalize-look-and-see#comments]]></comments><pubDate>Mon, 24 Feb 2020 19:25:10 GMT</pubDate><category><![CDATA[Uncategorized]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/why-rationalize-look-and-see</guid><description><![CDATA[2400 words; 10 minute read.I bet you’re underestimating yourself.Humor me with a simple exercise. When I say so, close your eyes, turn around, and flicker them open for just a fraction of a second. Note the two most important objects you see, along with their relative positions.Ready? Go.I bet you succeeded. Why is that interesting? Because the “simple” exercise you just performed requires solving a maddeningly difficult computational problem. And the fact that you solved it bears on the [...] ]]></description><content:encoded><![CDATA[<div class="paragraph"><font size="2" color="#818181">2400 words; 10 minute read.</font><br><br>I bet you&rsquo;re underestimating yourself.<br><br>Humor me with a simple exercise. When I say so, close your eyes, turn around, and flicker them open for just a fraction of a second. Note the two most important objects you see, along with their relative positions.<br><br>Ready? Go.<br><br>I bet you succeeded. Why is that interesting? Because the &ldquo;simple&rdquo; exercise you just performed requires solving a maddeningly difficult computational problem. And the fact that you solved it bears on the question of how rational the human mind is.</div><div><!--BLOG_SUMMARY_END--></div><div class="paragraph"><font color="#2A2A2A"><a href="https://www.kevindorst.com/stranger_apologies/plea_for_political_empathy" target="_blank">As I&rsquo;ve said before</a>, the point of this blog is to scrutinize the irrationalist narratives that dominate modern psychology. These narratives tell us that people&rsquo;s core reasoning methods consist of a collection of simple heuristics that lead to mistakes that are systematic, pervasive, important, and <em>silly</em>&mdash;that is, in principle easily correctable.
(Again, see authors like <a href="https://www.goodreads.com/book/show/357666.A_Mind_of_Its_Own" target="_blank">Fine 2005</a>, <a href="https://smile.amazon.com/Predictably-Irrational-Revised-Expanded-Decisions/dp/0061353248/ref=sr_1_1?keywords=predictably+irrational&amp;qid=1580818921&amp;sr=8-1" target="_blank">Ariely 2008</a>, <a href="https://smile.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555/ref=sr_1_3?keywords=thinking+fast+and+slow&amp;qid=1580818954&amp;sr=8-3" target="_blank">Kahneman 2011</a>, <a href="https://www.amazon.co.uk/Superforecasting-Science-Prediction-Philip-Tetlock/dp/0804136718" target="_blank">Tetlock and Gardner 2016</a>, <a href="https://www.amazon.co.uk/Misbehaving-Behavioural-Economics-Richard-Thaler/dp/1846144035" target="_blank">Thaler 2016</a>, and many others.)<br><br>&#8203;Much of this project will involve scrutinizing particular irrationalist interpretations of particular empirical findings in order to offer rational alternatives to them. But it&rsquo;s worth also spending time on a <strong>bigger-picture question</strong>: How open should we be to those rational alternatives? When we face conflicting rational and irrational explanations of some empirical phenomenon, which should we be more inclined to accept?&nbsp;<br><br>Given the dominance of the irrationalist research program in psychology, you might incline strongly toward the irrationalist explanation. Many do. The point of this post (and others to follow) is to question that inclination&ndash;&ndash;to suggest that rational alternatives have more going for them than you might have thought. &nbsp;<br><br>I should say now that this argument doesn&rsquo;t claim to be decisive&ndash;&ndash;there are plenty of ways to resist. But there are also ways to resist that resistance, and to resist the resistance to that resistance to that resistance, and&hellip; &hellip;and of course none of us have time for that in a blog post. Instead, I just want to get the basic argument on the table.</font></div><div class="paragraph" style="text-align:right;"><font color="#818181" size="2">(2000 words left)</font></div><div class="paragraph"><span style="color:rgb(42, 42, 42)"><strong><u>Two Systems</u></strong></span><br><span style="color:rgb(42, 42, 42)">I&rsquo;m going to make this argument by contrasting the scientific literature on two different cognitive systems. No, not Kahneman&rsquo;s famous&nbsp;</span><a href="https://en.wikipedia.org/wiki/Dual_process_theory#Systems" target="_blank">&ldquo;System 1&rdquo; and &ldquo;System 2&rdquo;</a><span style="color:rgb(42, 42, 42)">. Rather, two much more familiar aspects of your cognitive life: what you see, and what you believe&mdash;as I&rsquo;ll call them, your &ldquo;visual system&rdquo; and your &ldquo;judgment system&rdquo;. (I have no claim to expertise on either of these literatures, but I do think my rough characterization is accurate enough&mdash;if you think otherwise, please tell me!)</span><br><br><u style="color:rgb(42, 42, 42)">Start with your visual system</u><span style="color:rgb(42, 42, 42)">. According to our best science, what is the purpose of your visual system, the problem it faces, and the solution it offers?</span><br><br><em style="color:rgb(42, 42, 42)">The Purpose:</em><span style="color:rgb(42, 42, 42)">&nbsp;To help you navigate the world. 
More precisely, to (very) quickly build an accurate 3D map of your spatial surroundings so that you can effectively respond to a dynamic environment.</span><br><br><em style="color:rgb(42, 42, 42)">The Problem:</em><span style="color:rgb(42, 42, 42)">&nbsp;Recovering a 3D map of your surroundings based on a 2D projection of light onto your retinas is what&rsquo;s sometimes called an &ldquo;</span><a href="https://www.nature.com/articles/317314a0" target="_blank">ill-posed problem</a><span style="color:rgb(42, 42, 42)">&rdquo;&ndash;&ndash;there is no unique solution. For example, suppose (rather surprisingly) upon doing my exercise you saw a scene like this:</span></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/wine-glasses.jpg?1582576855" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div class="paragraph"><br><br>Where would you place the relative positions of the wine glasses? Equally close, no doubt. (And you&rsquo;d be correct.) But strictly speaking that is underdetermined by the picture&ndash;&ndash;you&rsquo;d see the exact same pattern of light if instead of two normal-sized wine glasses equally close, one of them were bigger and further away.&nbsp; (In more complex scenes, more complex alternatives would do the same trick.)<br><br>In other words, one of the core problems your visual system faces is that you cannot deduce with certainty the correct 3D map of your surroundings from the 2D projection of them onto your eyes.&nbsp; And yet&ndash;&ndash;somehow&ndash;&ndash;you <em>do</em> manage to come to the right conclusions, virtually every waking minute. &nbsp;<br><br>How do you do it?<br><br><em>The Solution:</em> The <a href="https://www.cns.nyu.edu/malab/vss-2017/Bayesian%20tutorial.pdf" target="_blank">arguably-most-successful</a> models of how you manage to do it fall within the class of <a href="https://www.sciencedirect.com/science/article/pii/S0042698910004724" target="_blank">Bayesian &ldquo;ideal observer&rdquo; models</a>. These models ask, &ldquo;What would an ideally rational agent who optimally used incoming information conclude about the visual scene?&rdquo; These models use literally the same formalism as the <a href="https://plato.stanford.edu/entries/epistemology-bayesian/" target="_blank">standard model of rational belief</a> that is <a href="https://en.wikipedia.org/wiki/Behavioral_economics" target="_blank">so maligned</a> by psychologists working on the judgment system&ndash;&ndash;more on that in a moment.<br><br>&#8203;<a href="http://www.scholarpedia.org/article/Computational_models_of_visual_attention#The_computer_vision_branch" target="_blank">Regardless of the details</a>, one thing that is certain is that your visual system uses incredibly sophisticated computations to solve this problem&ndash;&ndash;ones that easily avoid the mistakes that the most sophisticated computer vision systems still routinely make. (And they still make quite laughable mistakes&ndash;&ndash;here&rsquo;s how a state-of-the-art computer vision program classified the following picture:</div><div><div id="255423179986781446" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml"><blockquote class="twitter-tweet tw-align-center"><p lang="en" dir="ltr">A large cake with a bunch of birds on it .
<a href="http://t.co/AEMfDDL8mC">pic.twitter.com/AEMfDDL8mC</a></p>&mdash; INTERESTING.JPG (@INTERESTING_JPG) <a href="https://twitter.com/INTERESTING_JPG/status/616732059659304960?ref_src=twsrc%5Etfw">July 2, 2015</a></blockquote></div></div><div class="wsite-spacer" style="height:37px;"></div><div class="paragraph">For other funny examples, see <a href="https://twitter.com/interesting_jpg" target="_blank">this Twitter feed</a> or <a href="https://www.cambridge.org/core/services/aop-cambridge-core/content/view/A9535B1D745A0377E16C590E14B94993/S0140525X16001837a.pdf/building_machines_that_learn_and_think_like_people.pdf" target="_blank">page 16 of this paper</a>.)<br><br><em>The Narrative:</em> In short, our best science tells us that to solve the ill-posed but all-important problem faced by our visual system, our brains have evolved to implement (approximations of) the rationally optimal response to incoming information. Of course, <em>sometimes</em> these mechanisms lead to systematic errors&ndash;&ndash;as in the famous Muller-Lyer illusion, wherein lines of the same length appear to differ due to the angles attached to them:</div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/screen-shot-2020-02-24-at-8-44-33-pm.png?1582577089" alt="Picture" style="width:368;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div class="paragraph"><span style="color:rgb(42, 42, 42)">But&ndash;&ndash;by the near-optimal design of the system-&ndash;these mistakes are necessarily either rare or unimportant in everyday life. (Sure enough, the Muller-Lyer effect leads to&nbsp;</span><a href="https://en.wikipedia.org/wiki/M%C3%BCller-Lyer_illusion#Perspective_explanation" target="_blank">accurate perceptions of length</a><span style="color:rgb(42, 42, 42)">&nbsp;in normal scenarios.)</span><br><br><span style="color:rgb(42, 42, 42)">&#8203;The punchline? The human visual system is an amazingly rational inference machine.</span><br><br><u style="color:rgb(42, 42, 42)">Now turn to your judgment system</u><span style="color:rgb(42, 42, 42)">. According to our best science, what is the purpose of your judgment system, the problem it faces, and the solution it offers?&nbsp;</span><br><br><em style="color:rgb(42, 42, 42)">The Purpose:</em><span style="color:rgb(42, 42, 42)">&nbsp;To help you navigate the world. More precisely, to build an (abstract) map of the way the world is so that you can use that information to help guide your actions. (This is a pure &ldquo;</span><a href="https://plato.stanford.edu/entries/belief/#Func" target="_blank">functionalist</a><span style="color:rgb(42, 42, 42)">&rdquo; picture of judgment/belief. One way to resist is to argue that belief has important social/reputational/motivational roles&ndash;&ndash;see e.g.&nbsp;</span><a href="https://philpapers.org/rec/QUIURO" target="_blank">Jake Quilty-Dunn&rsquo;s</a><span style="color:rgb(42, 42, 42)">&nbsp;new paper on this.)</span><br><br><em style="color:rgb(42, 42, 42)">The Problem:</em><span style="color:rgb(42, 42, 42)">&nbsp;Recovering an accurate judgment about the world based on limited evidence is an &ldquo;ill-posed problem&rdquo; in exactly the same sense that recovering a 3D map from a 2D retinal projection is&ndash;&ndash;there is no unique solution. 
&nbsp;</span><br><br><span style="color:rgb(42, 42, 42)">Take a mundane example: given your limited evidence about me, it&rsquo;s impossible for you to determine with certainty whether I will walk 3 miles today. (All your evidence could be the same, and I could decide right now to go do so.) Nevertheless, I suspect you&rsquo;re quite confident that I won&rsquo;t. And, again, you are correct. But again, that fact was underdetermined by your evidence.</span><br><br><em style="color:rgb(42, 42, 42)">The Solution:</em><span style="color:rgb(42, 42, 42)">&nbsp;The most popular models of how people manage to make judgments in an uncertain world hold that they use a variety of&nbsp;</span><a href="https://en.wikipedia.org/wiki/Heuristics_in_judgment_and_decision-making" target="_blank">useful but simple &ldquo;heuristics&rdquo;</a><span style="color:rgb(42, 42, 42)">. These rules of thumb are so simple that they lead to systematic and severe mistakes, resulting in&nbsp;</span><a href="https://en.wikipedia.org/wiki/Overconfidence_effect" target="_blank">massive levels of overconfidence</a><span style="color:rgb(42, 42, 42)">. (See my&nbsp;</span><a href="https://www.kevindorst.com/stranger_apologies/overconfidence" target="_blank">post on overconfidence</a><span style="color:rgb(42, 42, 42)">&nbsp;for a critique of that final claim.) &nbsp;</span><br><br><span style="color:rgb(42, 42, 42)">By simple, I mean&nbsp;</span><em style="color:rgb(42, 42, 42)">simple</em><span style="color:rgb(42, 42, 42)">. These are heuristics like: forming your belief about whether I&rsquo;ll walk 3 miles today based on a&nbsp;</span><a href="https://en.wikipedia.org/wiki/Take-the-best_heuristic" target="_blank">single piece of evidence</a><span style="color:rgb(42, 42, 42)">, or based on whether such an activity is &ldquo;</span><a href="https://en.wikipedia.org/wiki/Representativeness_heuristic" target="_blank">representative</a><span style="color:rgb(42, 42, 42)">&rdquo; or &ldquo;prototypical&rdquo; of what you know about me; or picking a&nbsp;</span><a href="https://medium.com/mind-cafe/to-guard-against-manipulation-beware-of-the-anchoring-effect-532d48bd28a2" target="_blank">near-arbitrary</a><span style="color:rgb(42, 42, 42)">&nbsp;estimate of how much I walk on a given day as an &ldquo;</span><a href="https://en.wikipedia.org/wiki/Anchoring" target="_blank">anchor</a><span style="color:rgb(42, 42, 42)">&rdquo;, and then expanding it (</span><a href="https://journals.sagepub.com/doi/10.1177/0146167203261889" target="_blank">insufficiently</a><span style="color:rgb(42, 42, 42)">) to settle on your guess of how far I&rsquo;ll walk today.</span><br><br><a href="https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow#Heuristics_and_biases" target="_blank">Regardless of the details</a><span style="color:rgb(42, 42, 42)">, one thing we are assured is that the judgment system uses incredibly&nbsp;</span><em style="color:rgb(42, 42, 42)">un</em><span style="color:rgb(42, 42, 42)">sophisticated mechanisms to solve this problem&ndash;&ndash;ones that often make mistakes that even the&nbsp;</span><a href="https://www.amazon.co.uk/Superforecasting-Science-Prediction-Philip-Tetlock/dp/1511358491" target="_blank">simplest rational models</a><span style="color:rgb(42, 42, 42)">&nbsp;would avoid (like&nbsp;</span><a href="https://en.wikipedia.org/wiki/Conjunction_fallacy" target="_blank">this one</a><span style="color:rgb(42, 42, 42)">&nbsp;and&nbsp;</span><a href="https://en.wikipedia.org/wiki/Base_rate_fallacy"
target="_blank">this one</a><span style="color:rgb(42, 42, 42)">).</span><br><br><em style="color:rgb(42, 42, 42)">The Narrative:</em><span style="color:rgb(42, 42, 42)">&nbsp;In short, our best science tells us that to solve the ill-posed but all-important problem of forming judgments under uncertainty, our brains have evolved to implement simple heuristics that lead to systematic, pervasive, important, and silly errors. Of course, these heuristics often lead to accurate&nbsp; judgments; but&ndash;&ndash;by the poor design of the system&ndash;&ndash;they also lead to errors that are common and in important in everyday life.</span><br><br><span style="color:rgb(42, 42, 42)">The punchline? The human judgment system is a surprisingly&nbsp;</span><em style="color:rgb(42, 42, 42)">ir</em><span style="color:rgb(42, 42, 42)">rational inference machine.</span></div><div class="paragraph" style="text-align:right;"><font color="#818181" size="2">(1000 words left)</font></div><div class="paragraph" style="text-align:justify;"><u><span><strong>The Challenge</strong></span></u><span><strong>&nbsp;</strong></span><br>The contrast between these two narratives is striking&ndash;&ndash;all the more so since the basic contours of the problems are so similar.<br><br>I think we can use this contrast to buttress an &rdquo;evolutionary-optimality argument&rdquo; in favor of rational explanations. (The gist of such arguments are <a href="https://psycnet.apa.org/record/1990-98299-000" target="_blank">far from original to me</a>&ndash;&ndash;they often <a href="https://psycnet.apa.org/record/1990-98299-000" target="_blank">help motivate</a> the &ldquo;<a href="https://en.wikipedia.org/wiki/Rational_analysis" target="_blank">rational analysis</a>&rdquo; approach to cognitive science that has been <a href="https://science.sciencemag.org/content/331/6022/1279.full" target="_blank">gaining momentum</a> in recent decades.)<br><br>The argument starts with Quine&rsquo;s famous quip: &ldquo;Creatures inveterately wrong in their inductions [i.e. judgments based on limited evidence] have a pathetic but praiseworthy tendency to die before reproducing their kind&rdquo; (<a href="https://books.google.co.uk/books?hl=en&amp;lr=&amp;id=mdCoBwAAQBAJ&amp;oi=fnd&amp;pg=PA1&amp;dq=Quine+1970+%22natural+kinds%22+essays+in+honor+of+hempel&amp;ots=6bdkP6eS4L&amp;sig=UPf81gGikPlOUt8LuJKtRwoRC4w#v=onepage&amp;q&amp;f=false" target="_blank">Quine 1970</a>, 13). In other words, we have straightforward evolutionary reasons to think that people must make near-optimal use of their limited evidence in forming their beliefs&ndash;&ndash;for otherwise their ancestors would easily have been outcompeted by others who made better use of the evidence.<br><br>Of course, that grand conclusion is both too grand and too quick. 
There are several reasons the argument could go wrong&ndash;&ndash;here are three important ones:<ol><li>Perhaps the mistakes people make are <strong>not important</strong> enough to be corrected by evolutionary pressure.</li><li>Perhaps it was <strong>too difficult</strong> for evolution to hit upon reasoning methods that avoided these mistakes.</li><li>Perhaps the reasoning methods people use <em>were</em> near-optimal in our evolutionary past, but are <strong>no longer effective today</strong>.</li></ol><br><strong>My Claim:</strong>&nbsp;Although these are all legitimate concerns, they become less compelling once we see the contrast between the narratives on vision and judgment.<br><br>First, since few defenders of irrationalist explanations will say that the errors they&rsquo;ve identified are uncommon or unimportant, option (1) is an unpromising strategy. After all, much of the interest in this research (and the way it gets its funding!) comes from the fact that the reverse is supposed to be true. Some representative quotes:<ul><li>&ldquo;What would I eliminate if I had a magic wand? <strong>Overconfidence</strong>.&rdquo; (<a href="https://www.theguardian.com/books/2015/jul/18/daniel-kahneman-books-interview" target="_blank">Kahneman 2015</a>)</li><li>&ldquo;No problem in judgment and decision-making is more prevalent and more potentially catastrophic than <strong>overconfidence</strong>&rdquo; (<a href="https://psycnet.apa.org/record/1993-97429-000" target="_blank"><span>Plous 1993</span></a>, 213).&nbsp;</li><li>&ldquo;<span>If one were to attempt to identify a single problematic aspect of human reasoning that deserves attention above all others, the <strong>confirmation bias</strong> would have to be among the candidates for consideration... it appears to be sufficiently strong and pervasive that one is led to wonder whether the bias, by itself, might account for a significant fraction of the disputes, altercations, and misunderstandings that occur among individuals, groups, and nations.&rdquo; (<a href="http://psy2.ucsd.edu/~mckenzie/nickersonConfirmationBias.pdf" target="_blank"><span>Nickerson 1998</span></a>, 175)</span></li></ul><br>What about option (2)&ndash;&ndash;the proposal that rational Bayesian solutions to the problems faced by the judgment system were too difficult for evolution to hit upon? The narrative from vision science casts doubt on this: the visual system <em>has</em> hit upon (approximations of) the rational Bayesian solutions, suggesting that such solutions <em>were</em> evolutionarily accessible.<br><br><span>Of course, there is much more to be said about potential asymmetries between the evolutionary pressures on the visual and judgment systems&ndash;&ndash;e.g. in terms of</span> <a href="https://science.sciencemag.org/content/sci/331/6022/1279.full.pdf" target="_blank">time-scales</a> <span>(</span>how long have these systems been under such pressure?<span>) or domain complexity (are optimal solutions harder to hit for judgment than for vision (</span><a href="https://mitpress.mit.edu/books/modularity-mind" target="_blank">Fodor 1983</a><span>)?&ndash;&ndash;</span><a href="https://sites.google.com/site/tylerbrookewilson/" target="_blank">Tyler Brooke-Wilson</a> <span>has some interesting work on this). But that way lies the sequence of resistance to resistance to resistance&hellip; which none of us have time for.
The simple point that Bayesian solutions were hit upon in one domain suggests&ndash;&ndash;at the least&ndash;&ndash;that they are more evolutionarily accessible than we might have thought.</span><br><br><span>What about option (3)&ndash;&ndash;the claim that our judgment-forming methods were near-optimal in the evolutionary past, but not today? This may be the best response for the defender of irrationalist narratives, but it has at least two problems.</span><br><br><span>First, we are not talking about the optimality of specific methods of solving specific problems&ndash;&ndash;everyone agrees that humans are not optimal at performing long division.&nbsp; What we are talking about is the basic architecture of how people represent and reason with uncertainty. Is it (a) a broadly</span> <a href="https://en.wikipedia.org/wiki/Bayesian_inference" target="_blank">Bayesian picture</a><span>, assigning probabilities to outcomes and responding to new evidence by updating those probabilities in a rational way, or (b) a</span> <a href="https://en.wikipedia.org/wiki/Heuristics_in_judgment_and_decision-making" target="_blank">heuristics-driven picture</a><span>, governed by simple rules-of-thumb that</span> <a href="https://en.wikipedia.org/wiki/Selective_exposure_theory" target="_blank">systematically ignore evidence</a><span>, are easily affected by</span> <a href="https://en.wikipedia.org/wiki/Motivated_reasoning" target="_blank">desires</a><span>, and so on?&nbsp; There is a general framework for reasoning rationally under uncertainty&ndash;&ndash;that&rsquo;s the point of</span> <a href="https://plato.stanford.edu/entries/epistemology-bayesian/" target="_blank">Bayesian epistemology</a><span>. So the question is why evolution would&rsquo;ve hit upon this framework in one specific domain (vision), but entirely missed it in another (judgment).</span><br><br><span>Second, option (3) relies on the claim that today there is a radical break from our evolutionary past not only in the</span> <em>problems</em> <span>we face, but also in the</span> <em>types of reasoning</em> <span>that are helpful for solving those problems. The claim must be that the simple heuristics that lead to rampant mistakes today were serviceable (in fact, near-optimal) in our evolutionary history&ndash;&ndash;that people who used such heuristics would not have been outcompeted by people who reasoned better.</span><br><br><span>That claim is questionable. The problems faced by a forager looking for food or a hunter tracking prey while avoiding predators are different in substance but not in subtlety from those faced by a politician looking for a catchy slogan or a millennial writing an eye-catching tweet while avoiding saying something politically unacceptable. Why would the reasoning mechanisms that lead to systematic, pervasive, important, and silly errors in the latter domains (&ldquo;form a belief based on a single piece of evidence&rdquo;, &ldquo;ignore counter-evidence&rdquo;, etc.) 
be perfectly serviceable&ndash;&ndash;in fact, near-optimal&ndash;&ndash;in the former?</span><br><br><strong>Upshot:</strong> <span>the defender of irrationalist narratives is left with a genuine challenge: since evolution has found such ingenious solutions to the problem of</span> <em>seeing</em><span>, why has it landed on such inept solutions to the problem of</span> <em>believing</em><span>?</span><br><br><span>That challenge provides reason to doubt that our belief-forming mechanisms are really so inept after all&ndash;&ndash;to go back to that list of</span> <a href="https://en.wikipedia.org/wiki/List_of_cognitive_biases" target="_blank">200-or-so cognitive biases</a> <span>with an open mind. As we&rsquo;ll see, we can offer rational explanations of many of them.&nbsp; I think we should take those explanations seriously.</span><br><br><br><span>What next?</span><br><strong>If you want to see some of these rational alternative explanations</strong>, see this <a href="https://www.kevindorst.com/stranger_apologies/overconfidence" target="_blank">post&nbsp;on overconfidence</a> or&nbsp;<a href="https://phenomenalworld.org/analysis/why-rational-people-polarize" target="_blank">this post on confirmation bias and polarization</a>.<br><strong>If you want to learn more</strong> <span><strong>about &ldquo;rational analysis&rdquo;</strong>&nbsp;as an approach to cognitive science, chapters 1 and 6 of John Anderson&rsquo;s</span> <a href="https://www.taylorfrancis.com/books/9780203771730" target="_blank"><em>The Adaptive Character of Thought</em></a> <span>are the classic statements of the approach. For summaries of the latest research, see this</span> <a href="https://science.sciencemag.org/content/sci/331/6022/1279.full.pdf" target="_blank">2011 article in <em>Science</em></a> <span>or</span> <a href="https://www.youtube.com/watch?v=k8ppSA0FJwo" target="_blank">this TED talk</a> <span>by</span> <a href="http://news.mit.edu/2019/josh-tenenbaum-macarthur-fellowship-0925" target="_blank">MacArthur-&ldquo;genius&rdquo;-grant-recipient</a> <span>Josh Tenenbaum.</span><br><br><span>PS. Thanks to</span> <a href="https://www.draschkow.com" target="_blank">Dejan Draschkow</a> <span>for literature suggestions and</span> <a href="https://sites.google.com/site/tylerbrookewilson/home?authuser=0" target="_blank">Tyler Brooke-Wilson</a> <span>for helpful feedback on an earlier draft of this post.</span></div>]]></content:encoded></item><item><title><![CDATA[A Plea for Political Empathy]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/plea_for_political_empathy]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/plea_for_political_empathy#comments]]></comments><pubDate>Tue, 18 Feb 2020 05:00:00 GMT</pubDate><category><![CDATA[All Most Read]]></category><category><![CDATA[Polarization]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/plea_for_political_empathy</guid><description><![CDATA[2400 words; 10 minute read.The ProblemWe all know that&nbsp; people now disagree over political issues more strongly and more extensively than any time in recent memory. &nbsp;And&mdash;we are told&mdash;that is why politics is broken: polarization is the political problem of our age.Is it?      Polarization standardly refers to some measure of how much people disagree about a given topic&mdash;say, whether Trump is a good president. Current polling suggests that around 90% of Republicans approv [...] 
]]></description><content:encoded><![CDATA[<div class="paragraph" style="text-align:justify;"><font size="2"><font color="#818181">2400 words; 10 minute read.</font></font><br /><br /><u><span><strong><font size="4">The Problem</font></strong></span></u><br />We all know that people now disagree over political issues <a href="https://www.people-press.org/2014/06/12/political-polarization-in-the-american-public/" target="_blank">more strongly and more extensively</a> than at any time in recent memory. And&mdash;we are told&mdash;that is why politics is broken: polarization is the political problem of our age.<br /><br />Is it?</div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div class="paragraph">Polarization standardly refers to <a href="https://www.tandfonline.com/doi/abs/10.1080/0022250X.2016.1147443" target="_blank">some measure</a> of how much people disagree about a given topic&mdash;say, whether Trump is a good president. <a href="https://news.gallup.com/poll/203198/presidential-approval-ratings-donald-trump.aspx" target="_blank">Current polling</a> suggests that around 90% of Republicans approve of Trump, while only around 10% of Democrats do.&nbsp; We are massively polarized&mdash;and that is the problem. Right?<br /><br />Not obviously. Consider religion. On any metric of polarization, Americans have long been massively polarized on religious questions&mdash;even those that crisscross political battle lines. For example, <a href="https://news.gallup.com/poll/210704/record-few-americans-believe-bible-literal-word-god.aspx" target="_blank">84% of Christians</a> believe that the Bible is divinely inspired, compared to only 32% of the religiously unaffiliated (who now make up over a <a href="https://www.pewforum.org/2019/10/17/in-u-s-decline-of-christianity-continues-at-rapid-pace/" target="_blank">quarter of the population</a>). Yet few in the United States think that <em>religious</em> polarization is the political problem of our age. Why? Because most Americans who disagree about (say) the origin of the Bible have <a href="https://www.pewforum.org/2019/11/15/americans-trust-both-religious-and-nonreligious-people-but-most-rarely-discuss-religion-with-family-or-friends/#trustworthy" target="_blank">learned to get along</a>: large majorities maintain friendships across the religious/non-religious divide, trust members of each group equally, and are happy to agree to disagree. In short: although Americans are polarized on religious questions, they do not generally <em>demonize</em> each other as a result.<br /><br />Contrast politics. Whatever your opinion about whether Trump is a good president, consider your attitude toward the large plurality who disagree with you&mdash;the &ldquo;other side.&rdquo; Obviously you think they are wrong&mdash;but I&rsquo;m guessing you&rsquo;re inclined to say more than that. I&rsquo;m guessing you&rsquo;re inclined to say that they are <em>irrational</em>. Or <em>biased</em>. Or (let&rsquo;s be frank) <em>dumb</em>.<br /><br />You are not alone. A <a href="https://www.people-press.org/2016/06/22/partisanship-and-political-animosity-in-2016/" target="_blank">2016 Pew report</a> found that majorities of both Democrats and Republicans think that people from the opposite party are more &ldquo;close-minded&rdquo; than other Americans, and large pluralities think they are more &ldquo;dishonest&rdquo;, &ldquo;immoral&rdquo;, and &ldquo;unintelligent.&rdquo;<br /><br />This is new. 
Between 1994 and 2016 the percentage of Republicans who had a &ldquo;very unfavorable&rdquo; attitude toward Democrats <a href="https://www.people-press.org/2016/06/22/1-feelings-about-partisans-and-the-parties/" target="_blank">rose from 21% to 58%</a>, and the parallel rise for Democrats&rsquo; attitudes toward Republicans was from 17% to 55%:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/demonization-graphs_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph" style="text-align:justify;">So we do have a problem, and polarization is certainly part of it. But there is a case to be made that the crux of the problem is less that we <em>disagree </em>with each other, and more that we <em>despise</em> each other as a result.<br /><br />In a slogan: <strong>The problem isn&rsquo;t mere polarization&mdash;it&rsquo;s demonization</strong>.<br /><br />If this is right, it&rsquo;s important.&nbsp; If mere polarization were the problem, then to address it the opposing sides would have to come to agree&mdash;and the prospects for that look dim. But if <em>demonization</em> is a large part of the problem, then to address it we don&rsquo;t need to agree. Rather, what we need is to recover our <strong>political empathy</strong>: to be able to look at the other side and think that although they are wrong, they are <em>not</em> irrational. Or biased. Or dumb.<br /><br />The case of religion shows that it&rsquo;s possible to do this while still harboring profound disagreements: polarization doesn&rsquo;t <em>imply</em> demonization. And that raises a question: When it comes to politics, why do we suddenly feel the need to demonize those we disagree with? How have we come to be so profoundly lacking in political empathy?<br /><br /><u><span><strong><font size="4">The Story</font></strong></span></u><br />Here, I think, is part of the story.<br /><br />Though &ldquo;rational animal&rdquo; used to be our catch-phrase, the late 20<span>th</span> century witnessed a major change in the cultural narrative on human nature.&nbsp; The potted history:&nbsp;</div>  <div><div class="wsite-multicol"><div class="wsite-multicol-table-wrap" style="margin:0 -15px;"> 	<table class="wsite-multicol-table"> 		<tbody class="wsite-multicol-tbody"> 			<tr class="wsite-multicol-tr"> 				<td class="wsite-multicol-col" style="width:1.9757375819256%; padding:0 15px;"> 					 						  <div class="wsite-spacer" style="height:50px;"></div>   					 				</td>				<td class="wsite-multicol-col" style="width:93.163625978491%; padding:0 15px;"> 					 						  <div class="paragraph"><font size="3">Once psychologists started probing the assumption that people are rational, they found that it couldn&rsquo;t be further from the truth. Instead, people use a grab-bag of simple &ldquo;heuristics&rdquo; in their everyday reasoning that result in systematic, deep, and (in principle) easily avoidable biases. 
Rather than a paragon of rationality, humanity turned out to be the epitome of&nbsp;irrationality.</font></div>   					 				</td>				<td class="wsite-multicol-col" style="width:4.8606364395838%; padding:0 15px;"> 					 						  <div class="wsite-spacer" style="height:50px;"></div>   					 				</td>			</tr> 		</tbody> 	</table> </div></div></div>  <div class="paragraph"><span style="color:rgb(42, 42, 42)">You can see the results yourself. Search &ldquo;cognitive biases&rdquo;, and Wikipedia will offer up a list of&nbsp;</span><a href="https://en.wikipedia.org/wiki/List_of_cognitive_biases" target="_blank">nearly 200 of them</a><span style="color:rgb(42, 42, 42)">&mdash;each one discussed by anywhere from a few dozen to&nbsp;</span><a href="https://scholar.google.com/scholar?hl=en&amp;as_sdt=2005&amp;sciodt=0%2C5&amp;cites=13329790376729000594&amp;scipsc=&amp;q=Availability%3A+A+heuristic+for+judging+frequency+and+probability&amp;btnG=" target="_blank">over 10,000</a><span style="color:rgb(42, 42, 42)">&nbsp;scientific articles.&nbsp; You will be told that people are inept at (</span><a href="https://en.wikipedia.org/wiki/Conjunction_fallacy" target="_blank">many of</a><span style="color:rgb(42, 42, 42)">) the&nbsp;</span><a href="https://en.wikipedia.org/wiki/Base_rate_fallacy" target="_blank">basic principles</a><span style="color:rgb(42, 42, 42)">&nbsp;of reasoning under uncertainty; that they close-mindedly&nbsp;</span><a href="https://en.wikipedia.org/wiki/Confirmation_bias#Biased_search_for_information" target="_blank">search for evidence</a><span style="color:rgb(42, 42, 42)">&nbsp;that confirms their prior beliefs, and&nbsp;</span><a href="https://en.wikipedia.org/wiki/Boomerang_effect_(psychology)" target="_blank">ignore</a><span style="color:rgb(42, 42, 42)">&nbsp;or&nbsp;</span><a href="https://en.wikipedia.org/wiki/Confirmation_bias#Biased_interpretation" target="_blank">discount</a><span style="color:rgb(42, 42, 42)">&nbsp;evidence that contravenes those beliefs; that as a result they are&nbsp;</span><a href="https://en.wikipedia.org/wiki/Overconfidence_effect" target="_blank">systematically overconfident</a><span style="color:rgb(42, 42, 42)">&nbsp;in their opinions; and so on.</span><br /><br /><span style="color:rgb(42, 42, 42)">This movement in psychology had a major impact both&nbsp;</span><a href="https://en.wikipedia.org/wiki/Behavioral_economics" target="_blank">within academia</a><span style="color:rgb(42, 42, 42)">&nbsp;and in the broader culture, fueling the appearance of (</span><a href="https://www.goodreads.com/book/show/357666.A_Mind_of_Its_Own" target="_blank">many</a><span style="color:rgb(42, 42, 42)">)&nbsp;</span><a href="https://smile.amazon.com/Predictably-Irrational-Revised-Expanded-Decisions/dp/0061353248/ref=sr_1_1?keywords=predictably+irrational&amp;qid=1580818921&amp;sr=8-1" target="_blank">popular</a><span style="color:rgb(42, 42, 42)">&nbsp;</span><a href="https://smile.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555/ref=sr_1_3?keywords=thinking+fast+and+slow&amp;qid=1580818954&amp;sr=8-3" target="_blank">irrationalist</a><span style="color:rgb(42, 42, 42)">&nbsp;</span><a href="https://smile.amazon.com/Irrationality-History-Dark-Side-Reason/dp/0691178674/ref=sr_1_1?keywords=irrationality&amp;qid=1580892636&amp;sr=8-1" target="_blank">narratives</a><span style="color:rgb(42, 42, 42)">&nbsp;about human nature.&nbsp; Again, you can see the results for yourself. 
Using&nbsp;</span><a href="https://books.google.com/ngrams" target="_blank">Google Ngram</a><span style="color:rgb(42, 42, 42)">, graph how often terms like &ldquo;biased&rdquo; and &ldquo;irrationality&rdquo; appeared in print across the 20</span><span style="color:rgb(42, 42, 42)">th</span><span style="color:rgb(42, 42, 42)">&nbsp;century. You&rsquo;ll find, for example, that &ldquo;biased&rdquo; was 18 times more common (percentage-wise) at the dawn of the 21</span><span style="color:rgb(42, 42, 42)">st</span><span style="color:rgb(42, 42, 42)">&nbsp;century than it was at the dawn of the 20</span><span style="color:rgb(42, 42, 42)">th</span><span style="color:rgb(42, 42, 42)">:</span></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/screen-shot-2020-02-07-at-10-20-48-am_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/screen-shot-2020-02-06-at-8-58-07-am_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="wsite-spacer" style="height:10px;"></div>  <div class="paragraph">Upshot: we are living in an age of rampant irrationalism.<br /><br />How&mdash;according to the story that I&rsquo;m telling&mdash;does this rampant irrationalism feed into political demonization?&nbsp; Two steps.<br /><br />Step one is simple: we are now swimming in irrationalist explanations of political disagreement.&nbsp; It is not hard to see how these go. If people tend to reason their way to their preferred conclusions, to search for evidence that confirms their prior beliefs, to ignore opposing evidence, and so on, then there you have it: irrational political polarization falls out as a corollary of the informational choices granted by the modern information age. (Examples: <a href="https://www.salon.com/2007/11/07/sunstein/" target="_blank">Heuvelen 2007</a>, <a href="https://books.google.com/books?hl=en&amp;lr=&amp;id=jEWplxVkEEEC&amp;oi=fnd&amp;pg=PP9&amp;dq=Sunstein+going+to+extreme&amp;ots=GPiBXshx1C&amp;sig=aY1dbhkTlFJklP1KIxfz1JqGN2E" target="_blank">Sunstein 2009</a>, <a href="https://smile.amazon.com/Republic-Divided-Democracy-Social-Media/dp/0691180903/ref=sr_1_3?keywords=%23republic&amp;qid=1580892845&amp;sr=8-3" target="_blank">2017</a>; <a href="https://www.huffpost.com/entry/political-polarization-is-a-psychology-problem_b_5a01dd9ee4b07eb5118255e5?guccounter=1&amp;guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&amp;guce_referrer_sig=AQAAAFaQ8oEzibBubeaz3Ai62tPyY1GWBVrJ4q2UUhqwcEOXgmeOISzrY06hciO9rWeloWrFZ4OUJQRW3v8Fp_ijiv0k6a33hPX7cMlbFx9jLQ_yDHznDPl72s8p6FmKf2dJPRV9ZCvXWZD5bma-ZisvtSoBlBwCGo8zjV6hfzD4Br92" target="_blank">Carmichael 2018</a>; <a href="https://aeon.co/essays/why-its-as-hard-to-escape-an-echo-chamber-as-it-is-to-flee-a-cult" target="_blank">Nguyen 2018</a>; <a href="https://science.sciencemag.org/content/359/6380/1094.short" target="_blank">Lazer et al. 
2018</a>; <a href="https://www.bbc.com/future/article/20180416-the-myth-of-the-online-echo-chamber" target="_blank">Robson 2018</a>;&nbsp; <a href="https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;ved=2ahUKEwihqLOi9bfnAhWCShUIHXPrAYAQFjAAegQIBBAB&amp;url=https%3A%2F%2Fwww.nytimes.com%2F2019%2F01%2F19%2Fopinion%2Fsunday%2Ffake-news.html&amp;usg=AOvVaw1ojY3Lw4w4u9LpKJzL_QI3" target="_blank">Pennycook and Rand 2019</a>; <a href="https://fivethirtyeight.com/features/why-partisans-look-at-the-same-evidence-on-ukraine-and-see-wildly-different-things/" target="_blank">Koerth-Baker 2019</a>)<br /><br /><span>Step two</span> is more subtle. Suppose you read one of these articles claiming that disagreement on political issues is caused by irrational forces. Consider your opinion on some such issue&mdash;say, whether Trump is a good president. (Whatever it is, I&rsquo;m guessing it&rsquo;s quite strong.)&nbsp; Let&rsquo;s imagine that you think he&rsquo;s a bad president. Having come to believe that the disagreement on this question is due to irrationality, what should you <em>now</em> think&mdash;both about Trump, and about the rationality of various attitudes toward him?<br /><br />One thing you clearly should <em>not</em> think is: &ldquo;Trump is a bad president, but I&rsquo;m irrational to believe that.&rdquo; (That would be what philosophers call an &ldquo;<a href="https://onlinelibrary.wiley.com/doi/10.1111/nous.12026" target="_blank">akratic</a>&rdquo; belief&mdash;a paradigm instance of irrationality.) &nbsp;<br /><br />What, then, will you think?&nbsp; You can do one of two things:<ol><li><font color="#2a2a2a">&#8203;</font><font color="#2a2a2a">Acknowledge that you have been irrational, and so give up your belief that Trump is a bad president.</font></li><li>Maintain your belief about Trump, and so think that you have <em>not</em> been irrational in forming it.</li></ol>No one will&mdash;and arguably no one <em>should</em>&mdash;give up such a strong belief, based on such a wide variety of evidence, on the basis of a psychology op-ed they came across in the newspaper. So option (1) is out&mdash;you&rsquo;ll go with option (2).<br /><br />Now what will you think? You&rsquo;ve just been told by an authoritative source that widespread irrationality is the cause of the massive disagreement about whether Trump is a good president.&nbsp; You&rsquo;ve concluded that <em>your</em> opinion on the matter wasn&rsquo;t due to massive irrationality. So whose was?&nbsp; Why, the <em>other side&rsquo;s</em>, of course!&nbsp; You were always puzzled about how they could believe that Trump is a good president, and now you have an explanation: <em>they</em> were the ones who selectively ignored evidence, became overconfident, and all the rest. (Meanwhile, of course, those on the other side&nbsp; are going through exactly parallel reasoning to conclude that <em>your</em> side is the irrational one!)<br /><br />The result? When we come to think that irrationality caused polarization, we thereby come to think that it was the <em>other</em> <em>side&rsquo;s</em> irrationality that did so. &nbsp;<br /><br />And once we view them as irrational, other terms of abuse follow. 
After all, irrationality breeds immorality: choices that inadvertently lead to bad outcomes are unfortunate but not immoral if they were rationally based on all the available evidence&mdash;but are blameworthy and immoral if they were <em>ir</em>rationally based on a <em>biased</em> view of the evidence.&nbsp; So once we start thinking people are biased, we&rsquo;ll also start thinking that they are &ldquo;immoral&rdquo;, &ldquo;close-minded&rdquo;, and all the rest.<br /><br />In a slogan: <strong>Irrationalism turns polarization into demonization<em>.</em></strong><br /><br />That&rsquo;s the proposal, at least. Obviously it&rsquo;s only part of the story of the rise of political demonization&mdash;and I don&rsquo;t claim to know how big a role it&rsquo;s played. (I haven&rsquo;t found any literature on the connection; if you have, please share it with me!) But given how quickly discussions about political disagreements are linked to irrationalist buzz-words like &ldquo;confirmation bias&rdquo;, &ldquo;motivated reasoning&rdquo;, and the like, I would be shocked if there were no connection at all.<br /><br />As one small piece of evidence, consider this. Daniel Kahneman and Amos Tversky are the <a href="https://www.theguardian.com/books/2015/jul/18/daniel-kahneman-books-interview" target="_blank">founders of the modern irrationalist narrative</a>&mdash;with Kahneman winning a <a href="https://www.apa.org/monitor/dec02/nobel.html" target="_blank">Nobel Prize for that work</a>, and both becoming rock-star scientists known well outside academia. Question: when did their work&mdash;which was largely conducted in the 70s and 80s&mdash;achieve this rock-star status?&nbsp; Again, you can see it for yourself.&nbsp; Use Google Scholar to graph their citations per year. Next to that, graph the trends in political demonization we&rsquo;ve already seen.&nbsp; On each of those graphs, draw a line through 2001. Here&rsquo;s what you get:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/kahneman_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/tversky_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/20-2-27-demonization-w-line_orig.jpg" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph"><font color="#2a2a2a">Upshot: political demonization began its steady climb at the same time as the influence of the modern irrationalist narratives did.&nbsp; Maybe it wasn&rsquo;t a coincidence.</font><br /><br /><u><span><strong><font color="#2a2a2a" size="4">The Project</font></strong></span></u><br /><font color="#2a2a2a">If the problem is demonization&mdash;not mere polarization&mdash;then part of the solution is to restore political empathy. 
And if we lack political empathy in part because of rampant irrationalist narratives, then one way to restore it is to question those narratives.<br /><br />That&rsquo;s what I&rsquo;m going to do. I&rsquo;m going to write apologies&mdash;in the <a href="https://www.merriam-webster.com/words-at-play/the-history-of-the-word-apology" target="_blank">archaic sense</a> of the word&mdash;for the strangers we so readily label as &ldquo;biased,&rdquo; &ldquo;irrational,&rdquo; and all the rest. &nbsp;<br /><br />How? By going back to that list of <a href="https://en.wikipedia.org/wiki/List_of_cognitive_biases" target="_blank">200-or-so cognitive biases</a>, and giving it a closer look. I&rsquo;m a philosopher who studies theories and models of what thinking and acting rationally amounts to. There are (<a href="https://plato.stanford.edu/entries/decision-theory/" target="_blank">multiple</a>) <a href="https://plato.stanford.edu/entries/formal-epistemology/" target="_blank">entire subfields</a> devoted to these questions. There are no easy answers. So it&rsquo;s worth taking a closer look at how we all came to take it for granted that people are irrational, biased, and (let&rsquo;s be frank) <em>dumb</em>. &nbsp;<br /><br />I&rsquo;m going to be tackling this project from different angles. This blog will be relatively exploratory; my professional work will (try to) be rigorous and well-researched; my other public writings will (try to) distill that work into an accessible form. Sometimes (as in this post) I&rsquo;ll make a big-picture argument; more often, I&rsquo;ll take an in-depth look at some particular empirical finding that has been taken to show irrationality. Stay tuned for topics like: <a href="https://en.wikipedia.org/wiki/Overconfidence_effect" target="_blank">overconfidence</a>, <a href="https://en.wikipedia.org/wiki/Confirmation_bias" target="_blank">confirmation bias</a>, <a href="https://en.wikipedia.org/wiki/Conjunction_fallacy" target="_blank">the conjunction fallacy</a>, <a href="https://en.wikipedia.org/wiki/Group_polarization" target="_blank">group polarization</a>, <a href="https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect" target="_blank">the Dunning-Kruger effect</a>, <a href="https://en.wikipedia.org/wiki/Base_rate_fallacy" target="_blank">the base-rate fallacy</a>, <a href="https://en.wikipedia.org/wiki/Cognitive_dissonance" target="_blank">cognitive dissonance</a>, and so on. &nbsp;<br /><br />As we&rsquo;ll see, the bases for the claims that these phenomena demonstrate irrationality are sometimes remarkably shaky. Overgeneralizing a bit: often psychologists observe some interesting phenomenon, show how it could result from irrational processes, but move quickly over a crucial step&mdash;namely, offering a realistic model of what a <em>rational</em> person would think or do in the relevant situation, and an explanation of why it would be different. The reason for this is no mystery: psychologists do not spend most of their time developing realistic models of rational belief and action. Philosophers do. That&rsquo;s why philosophers have things to contribute here. 
Because&mdash;as we&rsquo;ll see&mdash;when we do the work of deploying such realistic models, often the supposedly irrational phenomenon turns out to be what we should expect from perfectly rational people.</font><br /><br /><u><span><strong><font color="#2a2a2a" size="4">The Hope</font></strong></span></u><br /><font color="#2a2a2a">In starting this project, my hope is to contribute to a different narrative about human nature&mdash;one often voiced by a <a href="https://science.sciencemag.org/content/331/6022/1279.full" target="_blank">different group of cognitive scientists</a>. These are the scientists who attempt to build machines that can duplicate even the simplest actions and inferences we humans perform every day, and discover just how fantastically difficult it is to do so. It turns out that most of the problems we solve every waking minute are <a href="https://www.amazon.co.uk/Algorithms-Live-Computer-Science-Decisions-ebook/dp/B015DLA0LE" target="_blank">objectively intractable</a>. Every time you recognize a scene, walk across a street, or reply to a question, you perform a computational and engineering feat that <a href="https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/building-machines-that-learn-and-think-like-people/A9535B1D745A0377E16C590E14B94993" target="_blank">the most sophisticated computers cannot</a>. I would say that the computations required for such feats make the calculations involved in the most sophisticated rational model look like child&rsquo;s play&mdash;but that would be forgetting that child&rsquo;s play itself involves those same computational feats. In short: rather than resulting from a grab-bag of simple heuristics, our everyday reasoning is the result of the&nbsp;<a href="https://science.sciencemag.org/content/350/6256/42/tab-pdf" target="_blank">most computationally sophisticated system</a> in the known universe running on all cylinders.<br /><br />I find this narrative compelling&mdash;far more so than the irrationalist one that (I&rsquo;ll argue) it&rsquo;s in competition with. That is why I am starting this project.<br /><br />I&rsquo;m doing so with a bit of trepidation, as I&rsquo;m a philosopher straying outside my comfort zone into other fields. No doubt sometimes I&rsquo;ll mess it up. (When I do, please tell me!) But I think it&rsquo;s worth the risk.<br /><br />Partly because I have things to say.&nbsp; But mostly because I hope to convince more people that there are still things here <em>worth</em> saying. How irrational are we, really?&nbsp; I think the question is far from settled. And part of what I&rsquo;m going to be arguing is that it is a question that is sufficiently subtle to appear on the pages of <em>The Philosophical Review</em>, and at the same time sufficiently consequential to appear on the pages of <em>The New York Times</em>. &nbsp;I hope you&rsquo;ll come to agree.<br /><br /><br />What next?</font><br /><font color="#2a2a2a"><strong>If you&rsquo;re an interested philosopher</strong>, reach out to me. There are a variety of people working on these topics, and I&rsquo;m hoping to help build and grow that network.</font><br /><font color="#2a2a2a"><strong>If you&rsquo;re an interested social scientist</strong>, reach out to me. 
Whether it&rsquo;s to inform me of literature I don&rsquo;t know (good!), to correct my mistakes about it (great!), or to propose something about it we might explore together (wonderful!), I would love to hear from you.</font><br /><font color="#2a2a2a"><strong>If you&rsquo;re just interested</strong>, sign up for <a href="https://kevindorst.substack.com/about" target="_blank">the newsletter</a>&nbsp;or <a href="https://twitter.com/kevin_dorst" target="_blank">follow me on Twitter</a>&nbsp;for updates; check out <a href="https://www.kevindorst.com/stranger_apologies/overconfidence" target="_blank">this detailed piece</a> scrutinizing the putative evidence for overconfidence, or <a href="https://phenomenalworld.org/analysis/why-rational-people-polarize" target="_blank">this bigger-picture piece</a> on the possibility of rational polarization; and stay tuned for new posts in the coming weeks and months.<br /><br /><br />PS. Thanks to <a href="http://web.mit.edu/cdg/www/" target="_blank">Cosmo Grant</a>, <a href="http://www.rachelelizabethfraser.com" target="_blank">Rachel Fraser</a>, <a href="http://www.ksetiya.net" target="_blank">Kieran Setiya</a>, and especially&nbsp;<a href="https://www.liamkofibright.com" target="_blank">Liam Kofi Bright</a> for much guidance with writing this post and starting this project. It&nbsp;should go without saying that any mistakes (here, or to come) are my own.</font></div>]]></content:encoded></item><item><title><![CDATA[How (Not) to Test for Overconfidence]]></title><link><![CDATA[https://www.kevindorst.com/stranger_apologies/overconfidence]]></link><comments><![CDATA[https://www.kevindorst.com/stranger_apologies/overconfidence#comments]]></comments><pubDate>Tue, 18 Feb 2020 05:00:00 GMT</pubDate><category><![CDATA[Overconfidence]]></category><guid isPermaLink="false">https://www.kevindorst.com/stranger_apologies/overconfidence</guid><description><![CDATA[2400 words; 10 minute read.A MistakeDo people tend to be overconfident? Let&rsquo;s find out.&nbsp; For each question, select your answer, and then rate your confidence in that answer on a 50&ndash;100% scale:Which has a bigger population: Rome or Madrid?Which came first: the printing press or Machiavelli&rsquo;s The Prince?Which is longer: an American football field, or a home-run lap? If you&rsquo;re like most people, your responses will have two features.      First, the test is difficult: it [...] ]]></description><content:encoded><![CDATA[<div class="paragraph"><span><font color="#818181" size="2">2400 words; 10 minute read.</font><br /><br /><strong><u><font size="4">A Mistake</font></u></strong></span><br /><span>Do people tend to be overconfident? 
Let&rsquo;s find out.&nbsp; For each question, select your answer, and then rate your confidence in that answer on a 50&ndash;100% scale:</span><ol><li>Which has a bigger population: Rome or Madrid?</li><li>Which came first: the printing press or Machiavelli&rsquo;s <em>The Prince</em>?</li><li>Which is longer: an American football field, or a home-run lap?</li></ol> If you&rsquo;re like most people, your responses will have two features.</div>  <div>  <!--BLOG_SUMMARY_END--></div>  <div class="paragraph">First, the test is difficult: it&rsquo;s likely that only one or two of your answers are correct.&nbsp; Second, your confidence in your answers probably does not reflect that degree of difficulty: I&rsquo;ve given the test to 50 people, and their average confidence in their answers was 64%&mdash;yet only <em>44%</em> of those answers were correct.&nbsp;<br /><br />When we look closer, this discrepancy between confidence and accuracy only becomes more striking. Question: of the claims people were 90%-confident in, what proportion were true?&nbsp; It is natural to think that if they are assessing their evidence properly, the answer will be &ldquo;90%&rdquo;. More generally, we might expect rational people to be <strong>calibrated</strong>: exactly 50% of the claims that they&rsquo;re 50%-confident in are true; exactly 60% of the claims that they&rsquo;re 60%-confident in are true, etc.<br /><br />Are people calibrated on this test?&nbsp; Not even close.&nbsp; For instance, of all the claims people were 90% confident in, only 63% were true.&nbsp; We can represent this and other discrepancies in a graph&mdash;called a <strong>calibration curve</strong>&mdash;where the <em>x</em>-axis represents degrees of confidence and the <em>y</em>-axis represents the proportion of the answers at that degree of confidence that were true. For instance, the point (0.9, 0.63) represents the fact that of all the claims that people were 90%-confident in, 63% were true.<br /><br />&#8203;If someone is calibrated, this graph will be a diagonal line. How does that line compare to people&rsquo;s actual calibration curve on my survey? Take a look:</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/survey-calcurve.jpg?1581866103" alt="Picture" style="width:394;max-width:100%" /> </a> <div style="display:block;font-size:90%">Fig. 1: Calibration curve for my survey.</div> </div></div>  <div class="paragraph">Yikes.&nbsp;<br /><br />&#8203;What to make of results like that?&nbsp; Most psychologists take them to provide evidence that people are systematically overconfident in their opinions (summaries: <a href="https://www.cambridge.org/core/books/judgment-under-uncertainty/calibration-of-probabilities-the-state-of-the-art-to-1980/9F0C9EC2997AEEB6DDDB304C2F935A16" target="_blank">here</a>, <a href="https://scholar.google.com/scholar?hl=en&amp;as_sdt=0%2C5&amp;q=hoffrage+2004+overconfidence&amp;btnG=" target="_blank">here</a>, and <a href="https://scholar.google.com/scholar?hl=en&amp;as_sdt=0%2C5&amp;q=Glaser+and+Weber+2010+overconfidence&amp;btnG=" target="_blank">here</a>). After all, if people are correct in their opinions far less often than they expect to be, then it's natural to infer that they are irrationally <em>over</em>confident in those opinions. 
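<br /><br /><em>(An aside for the computationally minded: a calibration curve is easy to compute. Here&rsquo;s a minimal Python sketch of the idea; the data and the helper name are illustrative, not my survey&rsquo;s.)</em><pre><code>from collections import defaultdict

# Each answer is a pair: (stated confidence, was the answer correct?).
# The data here are made up for illustration.
answers = [(0.9, True), (0.9, False), (0.9, False),
           (0.6, True), (0.6, False), (0.5, True)]

def calibration_curve(answers, bins=(0.5, 0.6, 0.7, 0.8, 0.9, 1.0)):
    """Snap each stated confidence to the nearest bin, then return
    the proportion of correct answers within each bin."""
    groups = defaultdict(list)
    for confidence, correct in answers:
        nearest = min(bins, key=lambda b: abs(b - confidence))
        groups[nearest].append(correct)
    return {b: sum(g) / len(g) for b, g in sorted(groups.items())}

print(calibration_curve(answers))  # {0.5: 1.0, 0.6: 0.5, 0.9: 0.333...}
# A calibrated subject would satisfy curve[b] == b (roughly) in each bin.</code></pre>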
&nbsp;<br /><br /><strong>That inference</strong>&mdash;from &ldquo;you are correct less often than you expect&rdquo; to &ldquo;you are irrationally overconfident&rdquo;&mdash;forms the basis of the (<a href="https://en.wikipedia.org/wiki/Overconfidence_effect" target="_blank">much</a>) <a href="https://www.psychologytoday.com/gb/blog/the-art-thinking-clearly/201306/the-overconfidence-effect" target="_blank">repeated</a> <a href="https://blogs.scientificamerican.com/observations/how-to-resist-the-lure-of-overconfidence/" target="_blank">claim</a> that people are systematically overconfident in their opinions. That claim has been a key component in the broader&nbsp;<a href="https://www.kevindorst.com/stranger_apologies/plea_for_political_empathy" target="_blank">cultural narrative of irrationality</a>, and has been cited as a driver of a raft of societal ills&mdash;including <a href="https://journals.sagepub.com/doi/full/10.1111/j.1529-1006.2004.00018.x" target="_blank">bad health habits</a>, <a href="https://pubs.aeaweb.org/doi/pdf/10.1257/aer.89.1.306" target="_blank">business failures</a>, <a href="https://pubs.aeaweb.org/doi/pdf/10.1257/aer.89.5.1279" target="_blank">market bubbles and crashes</a>, <a href="https://www.aeaweb.org/articles?id=10.1257/aer.20130921" target="_blank">political polarization</a>, <a href="https://journals.sagepub.com/doi/abs/10.1177/0963721418817755" target="_blank">intolerance</a>, and even <a href="https://books.google.co.uk/books?hl=en&amp;lr=&amp;id=Ccu7OhgusaAC&amp;oi=fnd&amp;pg=PP8&amp;dq=johnson+overconfidence+and+war&amp;ots=j6lFMVVyTS&amp;sig=mLReUPukpbEbRnuqH4VrwcuFNhY&amp;redir_esc=y#v=onepage&amp;q&amp;f=false" target="_blank">wars</a>.&nbsp; As one textbook puts it: &ldquo;No problem in judgment and decision-making is more prevalent and more potentially catastrophic than overconfidence&rdquo; (<a href="https://psycnet.apa.org/record/1993-97429-000" target="_blank"><span>Plous 1993</span></a>, 213).&nbsp;<br /><br /><span><strong>But the inference rests on a mistake.</strong><br /><br />More precisely, it is a mistake to infer simply from a calibration curve like the one I showed you to the conclusion that subjects were overconfident in their opinions.&nbsp; Since this inference undergirds the general conclusion about overconfidence, it is not at all clear that we have genuine evidence that people tend to be overconfident.<br /><br />Now, I think there <em>is</em> something importantly right about the inference (for the full story, see the <a href="https://philpapers.org/rec/DOROIO" target="_blank">paper this post is based on</a>).&nbsp; But here I&rsquo;m going to focus on what&rsquo;s wrong with it&mdash;on why the studies we&rsquo;ve been running do not, as they are currently constructed, provide a genuine test of overconfidence.</span><br /><br /><span><u><strong><font size="4">How Not</font></strong></u><br />What exactly are the results of those studies?<br /><br />The most widely-cited finding is what&rsquo;s known as the &ldquo;hard-easy effect&rdquo;. We can categorize a test based on its <strong>hit rate</strong>&mdash;the proportion of all subjects&rsquo; answers that are true. Say that a (binary-choice) test is <em>hard</em> if it has a hit rate of less than 75%, and <em>easy</em> if it has a hit rate of at least 75%. 
The <strong>hard-easy effect</strong> </span>is that on hard tests people&rsquo;s confidence tends to exceed the proportion true (hence their calibration curve is to the right of the diagonal line&ndash;&ndash;see the bottom lines in the graph below), whereas on easy tests the proportion true often exceeds their confidence (hence their calibration curve is to the left of the diagonal line&ndash;&ndash;see the top lines):</div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/screen-shot-2020-02-16-at-2-59-36-pm.png?1581865195" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%">Fig. 2: Calibration curves for easy tests (top curves) and hard tests (bottom curves). Figure from Lichtenstein, Sarah, Fischhoff, Baruch, and Phillips, Lawrence D. (1982). &lsquo;Calibration of probabilities: The state of the art to 1980&rsquo;. In Daniel Kahneman, Paul Slovic, and Amos Tversky, eds., Judgment under Uncertainty, 306&ndash;334. Cambridge University Press. Reproduced with permission of the Licensor through PLSclear. </div> </div></div>  <div class="paragraph" style="text-align:justify;"><span>The standard interpretation of this effect is that people fail to take difficulty into account, and therefore become overconfident on hard tests and under-confident on easy tests.<br /><br />Is that a sensible interpretation?&nbsp; To evaluate it, we need to be clear on what it means to say that you are &ldquo;overconfident.&rdquo; It does <em>not</em> simply mean that your average confidence in your opinions exceeds the proportion of those opinions that are true. For example: even though people&rsquo;s average confidence on my test was 64% and the proportion true was 44%, it&rsquo;s still possible that they were fully rational.<br /><br />Consider an example. I have a coin that is 60% biased toward heads, and I&rsquo;m about to toss it 10 times.&nbsp; How confident are you, for each toss, that it&rsquo;ll land heads? 60%, no doubt.&nbsp; Now I toss the coin&hellip; and (surprisingly) it turns out it landed heads only 3 of 10 times&mdash;so only 30% of the opinions that you were 60% confident in were true.&nbsp; Does that mean you were overconfident?&nbsp; Of course not&mdash;your 60% confidence was perfectly rational given your evidence about the coin; it just so happened that you got unlucky with how the coin landed.<br /><br />The point? There is a difference between rational degrees of confidence and calibrated degrees of confidence.&nbsp; In saying that you are &ldquo;overconfident&rdquo;, we are presupposing that there is some degree of confidence&ndash;&ndash;<em><strong>R</strong></em>&ndash;&ndash;that it would be <em>R</em>ational for you to have, and that your actual opinion is more extreme than <em>R</em>.&nbsp;<br /><br />Once we make this conceptual point, it&rsquo;s clear that there is no necessary connection between the (average) rational confidence on a test and the proportion of claims on the test that are true.&nbsp; In fact, by manipulating the construction of the test, we can distort that connection as much as we like.<br /><br />Consider Calvin.&nbsp; Suppose that, given his evidence (memories, general knowledge, etc.), the rational confidence for him to have in the following claims&mdash;i.e. 
<em>R</em>&mdash;is as follows:</span><ol><li>Rome is bigger than Madrid: 60%.</li><li><span>London is bigger than New York City: 50%.</span></li><li><span>The printing press came before Machiavelli&rsquo;s <em>The Prince</em>: 60%.</span></li><li><span>A football field is longer than a home-run lap: 70%.</span></li><li><span>A bowling alley is longer than a half-court shot: 80%.</span></li></ol> <span>Then if we gave him a test with these claims on it, his average rational confidence would be 64% and the proportion of truths would be 3 of 5, or 60%&mdash;not a bad calibration score. But now suppose that before we give him the test, we first remove two of the true claims: the second and the last. Then we&rsquo;ve made a new test for which his average rational confidence is 63%, but the proportion true is only 33%&mdash;we&rsquo;ve (quite easily) made him rationally miscalibrated!<br /><br />This procedure is fully general. Take any test you like, and suppose that Calvin&rsquo;s rational degrees of confidence would be calibrated on it. Then as we start to remove true claims from the test, his rational confidence will start to exceed the proportion true&mdash;both will tend toward 0%, but the latter will do so much faster.&nbsp; Conversely, if we start to remove <em>falsehoods</em> from the test, both his confidence and the proportion true will tend toward 100%, but again the latter will do so much faster. Either way, he&rsquo;ll become substantially miscalibrated.<br /><br /><strong>First upshot: </strong>Miscalibration is not good evidence for irrationality, since&nbsp;fully rational people will often be miscalibrated&ndash;&ndash;especially on tests that contain a high (or low) proportion of truths.&nbsp;<br /><br /><strong>Second upshot:</strong> Therefore, we cannot test whether people are overconfident by simply testing whether their calibration curves deviate from the diagonal-line calibrated one, as is standardly done.<br /><br />How, then, can we test for overconfidence?<br /><br /><u><strong><font size="4">How To</font></strong></u><br />To do so, we need to know what to expect a <em>rational</em> person&rsquo;s calibration curve to look like on a test with a given hit rate.<br /><br />Here&rsquo;s a helpful analogy. Suppose we have a variety of coins of different biases: some are 90% likely to land heads on a given toss, others 80%, and so on.&nbsp; Suppose we&rsquo;ve tossed these coins lots of times and written down the bias of the coin on one side of a piece of paper, and the way it landed on the other.&nbsp; Now we collect a sample of these slips of paper, show the bias-indicating side of each slip to our subject Bianca, and have her announce her guess as to whether the coin landed heads or tails along with her confidence in that guess.&nbsp; After doing this, we plot her calibration curve.<br /><br />The analogy: the bias of the coin is like the evidence that Calvin has about the questions on his test; the fact that Bianca knows the biases is analogous to Calvin knowing the rational confidence <em>R</em> for him to have in each claim; the ways of collecting the slips of paper correspond to different ways of constructing the test.<br /><br />Finally, note that it is not realistic to suppose that a person like Calvin is <em>always</em> perfectly rational in setting his degrees of confidence&mdash;it is far more plausible to suppose that he tends to be rational, but with some random error. 
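<br /><br /><em>(Another aside for the computationally minded: before we add that error in, the test-construction point above is easy to check numerically. Here&rsquo;s a minimal Python sketch using Calvin&rsquo;s numbers; the helper is my own illustrative choice.)</em><pre><code># Calvin's rational confidence in each claim, paired with its truth-value
# (the five claims from the example above; 3 of the 5 are true).
claims = [(0.6, False),  # Rome is bigger than Madrid
          (0.5, True),   # London is bigger than New York City
          (0.6, True),   # the printing press came before The Prince
          (0.7, False),  # a football field is longer than a home-run lap
          (0.8, True)]   # a bowling alley is longer than a half-court shot

def summarize(test):
    avg_confidence = sum(conf for conf, _ in test) / len(test)
    proportion_true = sum(truth for _, truth in test) / len(test)
    return avg_confidence, proportion_true

print(summarize(claims))   # (0.64, 0.6): roughly calibrated

# Now remove two of the true claims (the second and the last), as above:
trimmed = [claims[0], claims[2], claims[3]]
print(summarize(trimmed))  # (0.633..., 0.333...): rationally miscalibrated</code></pre>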
To make Bianca&rsquo;s case analogous, let&rsquo;s suppose that the bias of the coin is indicated on the slip of paper by an unmarked slider, like this one, which indicates a 60% bias toward heads:</span></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/screen-shot-2020-02-16-at-3-12-37-pm.png?1581865971" alt="Picture" style="width:520;max-width:100%" /> </a> <div style="display:block;font-size:90%"></div> </div></div>  <div class="paragraph" style="text-align:justify;"><span>So given just the quick glance she&rsquo;s afforded, Bianca is pretty good (but not perfect) at recognizing the bias of the coin.<br /><br />Given this setup, what should we expect Bianca&rsquo;s (and, by analogy, a rational subject like Calvin&rsquo;s) calibration curve to look like on various tests?<br /><br />We can simulate it. &nbsp;<br /><br />Suppose the slips of paper are randomly drawn from all tosses of the coins. Given that, what should we expect Bianca&rsquo;s calibration curve to look like on (1) a randomly selected test, (2) a random test that turns out to be <em>easy</em> (high hit rate), and (3) a random test that turns out to be <em>hard</em> (low hit rate)? Here are the results from a simulation of 50,000 trials, averaging her calibration curves within each category (1)&ndash;(3):</span></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/simulations-he.jpg?1581866114" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%">Fig. 3: Simulation from random tests.</div> </div></div>  <div class="paragraph" style="text-align:justify;"><span>As you can see, over all trials (green line) Bianca is roughly calibrated&mdash;though at the end-points of the scale, the curve is tilted due to &ldquo;<a href="https://psycnet.apa.org/record/2000-15248-008" target="_blank">scale-end effects</a>&rdquo; (her errors in identifying the bias can only go in one direction). Nevertheless, we see a realistic hard-easy effect: on the hard tests (red line), Bianca&rsquo;s confidence exceeds the proportion of truths; on the easy tests (orange line), her confidence is exceeded by it (except at the top end).&nbsp;</span><br /><br /><span><strong>Upshot: </strong>Since Bianca&rsquo;s case is analogous to Calvin&rsquo;s, we can see that <em>rational</em> people would exhibit a hard-easy effect very similar to the one that&rsquo;s empirically observed. (Compare the orange and red lines of Figure 3 to the top and bottom lines of Figure 2.)<br /><br />Why?&nbsp; Because when we focus in on tests with low (or high) hit rates, there is a <strong>selection effect</strong> that Bianca cannot account for.&nbsp; Suppose we take a test that happened to have a low hit rate. Why was it low? 
One explanation is that our slips of paper happened to correspond to coins with moderate biases (close to 50%), and so were harder to predict.&nbsp; A different explanation is that even amongst the coins with a strong bias, fewer of those landed heads than you would normally expect (perhaps only 60% of the 70%-heads-biased coins landed heads).&nbsp; In any given test with a low hit rate, it&rsquo;s likely that both of these factors are at play.&nbsp; And although Bianca can be sensitive to the first factor by recognizing that most of the biases of the coins are moderate, she cannot be sensitive to the second factor.&nbsp; As a result, as the hit rate becomes more extreme, she becomes less calibrated. &nbsp;<br /><br />For example, the average hit rate over all trials was 75% and Bianca&rsquo;s average confidence was 75%; but amongst tests on which the hit rate was 60%, Bianca&rsquo;s average confidence only fell to 73.3%; and amongst tests on which the hit rate was 85%, Bianca&rsquo;s average confidence only rose to 76%.<br /><br />Likewise, of course, for Calvin: even rational people facing randomly selected test questions will display a form of the hard-easy effect.<br /><br />With this in mind, let&rsquo;s revisit <em>my</em> test. If we run this simulation with the observed hit rate of 44% and graph the predicted rational calibration curve (green line) against the empirically observed one (orange line), here&rsquo;s what we get:</span></div>  <div><div class="wsite-image wsite-image-border-none " style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"> <a> <img src="https://www.kevindorst.com/uploads/8/8/1/7/88177244/published/randsim.jpeg?1581866212" alt="Picture" style="width:auto;max-width:100%" /> </a> <div style="display:block;font-size:90%">Fig. 4: Predicted rational calibration curve on my survey.</div> </div></div>  <div class="paragraph"><strong>Upshot:</strong> as I said, the observed miscalibration on my test is <em>not</em> evidence that my subjects were irrational&ndash;&ndash;in fact, a rational person should be expected to have a very similar calibration curve.<br /><br />What to make of all this?&nbsp; I think it means that the current methodology for testing for overconfidence is untenable. This methodology involves assuming that a rational person will be calibrated on the test, and then seeing whether real people deviate from this. As we&rsquo;ve seen, that is a mistake: even&nbsp;<em>rational</em> people will be systematically miscalibrated on tests that are hard or easy. (Similar morals apply, I think, to other empirical effects&ndash;&ndash;see the <a href="https://philpapers.org/rec/DOROIO" target="_blank">full paper</a>.) Thus to make a genuine test of overconfidence we must first predict the rational deviations from calibration on our test, and then compare real people&rsquo;s performance to <em>that</em> prediction. &nbsp;<br /><br />As we&rsquo;ve seen, doing so has the potential to reverse our interpretation of the data. 
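<br /><br /><em>(One last aside for the computationally minded: here&rsquo;s a minimal Python sketch of the kind of simulation just described. The noise model and parameters are illustrative stand-ins, not the ones from my simulations&mdash;see the simulators linked below to play with the real thing.)</em><pre><code>import random

def simulate_test(n_questions=20, noise=0.05):
    """One simulated test: each question's bias is the rational confidence;
    Bianca reads the bias with a little Gaussian error; the claim is true
    with probability equal to its bias."""
    confidences, truths = [], []
    for _ in range(n_questions):
        bias = random.uniform(0.5, 1.0)
        seen = min(1.0, max(0.5, random.gauss(bias, noise)))
        confidences.append(seen)
        truths.append(bias >= random.random())
    return sum(truths) / n_questions, sum(confidences) / n_questions

trials = [simulate_test() for _ in range(50_000)]
# Condition on the tests that happened to be hard (hit rate of 60% or lower),
# then compare Bianca's average confidence to that hit rate:
hard = [conf for hit, conf in trials if 0.6 >= hit]
print(sum(hard) / len(hard))
# Her average confidence stays well above 0.6: even a (nearly) rational
# agent looks "overconfident" once we select for the hard tests.</code></pre>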
Perhaps we&rsquo;ve been too confident that people tend to be overconfident.<br /><br /><br />What next?<br /><strong>If you want more of the details</strong>, check out the full paper <a href="https://philpapers.org/rec/DOROIO" target="_blank">here</a>.<br /><strong>If you&rsquo;re an academic working on these topics </strong>and would like to chat&mdash;including to point me to parts of the literature that I may have misunderstood, or simply missed&mdash;please reach out to me!<br /><strong>If you want to play with the simulators yourself</strong>, here are two I&rsquo;ve posted online. This <a href="https://www.wolframcloud.com/obj/493e835d-6254-4483-9276-83ca0438cc1c" target="_blank">subject-matter simulator</a> simulates both fully random questions and questions pulled from subject-matters that have random degrees of misleadingness. This <a href="https://www.wolframcloud.com/obj/59f79e98-d389-49d6-876d-f7ad7f877939" target="_blank">scrutinized-question simulator</a> simulates a test where questions are scrutinized for their difficulty before it is decided whether they are included. For more details about these simulations, see &sect;5 of the <a href="https://philpapers.org/rec/DOROIO" target="_blank">full paper</a>.<br /><br />PS. The answers are: Madrid; the printing press; a home-run lap.<br />PPS. Thanks to <a href="http://web.mit.edu/cdg/www/" target="_blank">Cosmo Grant</a>&nbsp;for feedback on this blog post, and many others for feedback on the full paper.</div>]]></content:encoded></item></channel></rss>