This is a guest post from Jake Quilty-Dunn (Oxford / Washington University), whose take on the question of rationality differs interestingly from mine. This post is based on a larger project; check out the full paper here.
(2500 words; 12 minute read.)
Is it really possible that people tend to be rational?
On the one hand, Kevin and others who favor “rational analysis” (including Anderson, Marr, and leading figures in the recent surge of Bayesian cognitive science) have made the theoretical point that the mind evolved to solve problems. Looking at how well it solves those problems—in perception, motor control, language acquisition, sentence parsing, and elsewhere—it’s hard not to be impressed. You might then extrapolate to the acquisition and updating of beliefs and suppose those processes ought to be optimal as well.
On the other hand, many of us would like simply to point to our experience with other human beings (and, in moments of brutal honesty, with ourselves). That experience seems on its face to reveal a litany of biases and irrational reactions to circumstances, generating not only petty personal problems but even larger social ills.
A prominent research program in social psychology has lent scientific legitimacy to this pessimistic picture of human nature. People irrationally ignore base rates, or form overconfident judgments of their own performance, or harbor implicit biases, or fall prey to various probabilistic “fallacies.” Researchers have been eager to enumerate biases and heuristics and construct a taxonomy of irrational thought patterns. One might optimistically think that, if we can only learn to correct for these design flaws, our species could have some hope of transcending our fallen cognitive nature.
But, to return to the rational analysis approach, there’s something strange about this picture of human cognition as irrational: if we’re so good at perceiving, parsing, acting, and other complicated cognitive tasks, why should we be so bad at reasoning? Furthermore, is the empirical evidence for irrationality really that strong?
Kevin and others (including other contributors to this blog) have provided compelling critiques of specific claims made by cognitive scientists and philosophers in the broadly “irrationalist” tradition. These critiques often target a particular style of argument for irrationalism. Researchers may have the impression that a certain pattern of judgment, such as metacognitive overconfidence, is obviously irrational. Then, when we find results suggesting that people exhibit that pattern, we add it to the pile of evidence supporting a dismal view of human rationality. But this style of evidence-gathering is vulnerable to rational reconstruction—if we think again about what rationality requires, perhaps overconfidence, framing effects, hindsight bias, the gambler’s fallacy, the sunk cost fallacy, etc., are not so irrational after all.
Disagreements of this sort typically concern how to model a given input-output function: can we imagine a rational/irrational creature that would exhibit just this sort of cognitive response (output) to just this sort of evidence (input)? This is a productive debate, and rationalists are providing useful challenges to the default status claimed by irrationalist models. But I don’t think the evidence at issue here provides the strongest case for irrationalism.
Instead, I want to focus on cognitive processes underlying belief updating. Rather than observing an input-output function and debating over whether to model it rationally or irrationally, it would be nice if we had some evidence about what the actual causal processes are that underwrite a pattern of belief change. We could then judge whether that process looks to have the rationally optimal character of vision and motor control (though there may be counterexamples even in vision), or whether it has another, rationally degenerate purpose. So one task for the irrationalist is to specify a non-rational but adaptive purpose for belief updating, and then find evidence of cognitive mechanisms that seem geared toward that purpose rather than a rationalistic one.
Fortunately (or unfortunately, depending on your allegiance), I think there is evidence of just this sort concerning rationalization. I’ll focus on the generation and reduction of cognitive dissonance. This focus is partly because of space limitations and partly because dissonance is arguably the strongest and best-studied example of a cognitive process underlying rationalization that has an expressly irrational character and purpose.
The basics: When people encounter evidence that contradicts one of their strongly held beliefs, they experience an unpleasant feeling of cognitive dissonance, a sort of psychological pain. Cognitive dissonance involves not only negative affect—the unpleasant feeling—but also a motivational force, akin to the motivational force to eat that accompanies hunger. But cognitive dissonance doesn’t motivate us to eat (except when it does—meat-eating is often sustained through dissonance reduction); it motivates us to reconcile the contradiction in a way that palliates the negative feeling of dissonance.
Here’s an example. Suppose a kindly experimenter asks you to twiddle a knob for twenty minutes. After completing this boring task, the experimenter then tells you there’s another participant waiting outside, and asks whether you would please tell them that, contrary to the experience you just had, the task was actually fun. If you’re lucky, she offers you 100 dollars to do this, and if you’re unlucky, she offers you 1 dollar. After you tell the participant that the task was fun, you’re then asked for your true beliefs about how fun the task was.
Here’s the result: if you were paid 100 dollars, you think the task was boring and that you lied when you said it was fun. But if you were paid 1 dollar, you think the task was fun and that you were accurately relaying information to the other participant. (This description is adapted from a classic experiment.)
Why should getting paid less money make you think the task was more fun? Since the task was really boring, acknowledging that fact means acknowledging that you lied when you said it was fun. If you were paid 100 dollars, it’s easy to admit that you’d tell a harmless lie for that much money. But if you were only paid 1 dollar, it’s harder to explain your own behavior without facing some harsh truths. Is it that you’re petty enough that you’ll lie for 1 dollar? Or suggestible enough that you’ll lie for 1 dollar as long as an authority figure requests it? Or oblivious enough that you misjudged your own reaction to the task?
There is an irrationalist explanation of effects like this. When faced with a contradiction between your beliefs about yourself—I’m good, I’m rational, I’m competent—and your knowledge of your own behavior—I just told a stranger that this boring task was fun because a psychologist offered me 1 dollar—you experience cognitive dissonance. (This is the generation of dissonance.) This negative feeling motivates you to push your attitudes around in a way that alleviates psychological discomfort. Often enough, the easiest way to do so is simply to change your belief so that your behavior accords with it. In this case, you change your judgment about the task: the task was really fun after all, so when you told that stranger that the task was fun, you were just honestly reporting your beliefs! No obvious negative implications about your personality follow, so you can rest easy. (This is the reduction of dissonance.)
This explanation might strike you as possible, but thoroughly non-obvious and maybe even overly complex. To be sure, alternative rational reconstructions abound. For instance, Daryl Bem (before his forays into ESP) argued that what’s really going on in experiments like this is that people don’t have direct access to their attitudes and must infer them from their behavior. So, since I told this person the task was fun and 1 dollar is not enough to make me lie, I must have done it because I really believe the task is fun.
In a forthcoming paper called “Rationalization is Rational”, Fiery Cushman argues that we rationalize in order to work out what it would be most rational for us to believe given our behavior, thereby facilitating a form of learning he calls “representational exchange,” wherein we acquire conscious propositional attitudes that repackage the murkier unconscious sources of our original action in a form we can consciously act on. Even if I didn’t say the task was fun because I actually believed it, I rationally ought to adopt that belief anyway because it makes the most sense out of my previous action and is therefore likely to afford beneficial actions in the future.
Now it looks like we’ve ended up in the same position as ever: we have a pattern of belief change in response to evidence, and we can model it as rational or irrational. Fortunately, however, the irrationalist model we get from dissonance theory affords more concrete claims about the causal processes underlying this belief change. Specifically, according to the version of dissonance theory I articulated above, rationalization is self-serving. What I mean by this is twofold:

1. Rationalization is driven by negative affect: it is the discomfort of dissonance, rather than the evidence itself, that motivates the belief change.
2. Rationalization serves self-esteem: its function is to protect our image of ourselves as good, rational, competent people.

Claims 1 and 2 also give us some empirical predictions: first, rationalization should be mediated by negative affect, and second, rationalization should be responsive to self-esteem. The evidence supports these claims.
Evidence for Claim #1: When people are in dissonance experiments like the one mentioned above, they report feeling bad. Even more importantly, they show implicit marks of negative affect, such as measurable changes in electrical activity on the skin that signal stress as well as neural activation in regions linked to negative emotion. If these bad feelings are driving the belief change, then mitigating the feelings or making subjects think the feelings derive from an irrelevant source should mitigate the belief change as well. And that’s just what happens: drinking alcohol after dissonance is induced minimizes rationalization. Likewise, giving people a sugar pill and telling them it will make them uncomfortable leads them to misattribute the negative affect of dissonance to the pill, and therefore makes them less likely to shift their attitudes.
Evidence for Claim #2: The evidence here is more indirect. But a simple prediction is that lowering people’s self-esteem (e.g., by giving them a bogus personality assessment with harsh results) should minimize dissonance-based rationalization. This prediction turns out to be true. And if, after dissonance is generated, you’re reminded of some of your good qualities (which are strictly irrelevant to the task at hand), rationalization is again minimized.
All this evidence suggests that the way we shift our beliefs in response to evidence of our own failings has an irrational character. We first feel bad in response to the attack on our self-esteem, and we then respond by unconsciously shifting our attitudes around in a way that alleviates the negative feeling and preserves our image of ourselves as good, rational, competent people.
My goal here is not to argue that the rational-analysis perspective has no way of modeling these phenomena (though Eric Mandelbaum makes that argument). Instead of testing the limits of rationalistic modeling, we can look “under the hood” at the actual processes driving belief change. Once we do, we find these effects are driven by negative affect and self-esteem. Irrationalists (of a certain sort) have a readymade explanation: we often update beliefs through rationalization, which is geared toward eliminating negative affect and has the purpose of protecting our image of ourselves and thereby maintaining stable motivation and avoiding depression, anxiety, and other maladaptive states of mind. A Bayesian might be able to rationally reconstruct the belief-updating patterns observed in dissonance experiments. But the challenge for them presented by dissonance-based rationalization is not met merely by developing a rationalistic model. Instead, they need also to provide an equally good explanation for why these changes in belief are (i) predictably motivated by negative affect and (ii) predictably responsive to self-esteem.
Now I want to circle back to the question of the “purpose” of belief updating. Kevin’s comparison to visual perception raises the challenge: why should we see rational degeneracy in reasoning when we (apparently) see rational optimality in vision and elsewhere? I think there is a good answer in the case of dissonance-based rationalization.
Having persistent negative beliefs about yourself—I’m a bad person, I’m stupid, I’m incompetent—is plainly not conducive to mental health. Negative thoughts about the self, and the depression and lack of motivation they engender, are not adaptive. But it doesn’t take a scientific research program to show that human behavior routinely falls short of our ideals; our species is immoral, irrational, and incompetent as a matter of course. A perfectly optimal Bayesian updater saddled with human flaws is therefore at risk of adopting negative self-appraisals in ways that hamper healthy emotional functioning. Thus the rationalistic model of cognition itself, if true, creates a motivational problem that needs to be solved through non-rationalistic means.
The brand of irrationalism that I favor (owing to Gilbert and Mandelbaum) posits a psychological immune system to address this design flaw. Dissonance forms one mechanism in this immune system, and its purpose is to push our beliefs in a direction that allows us to maintain a positive self-image despite evidence to the contrary, and thereby keep us emotionally stable and functional. To fulfill this function, our minds generate cognitive dissonance when our otherwise rational belief-updating processes deliver conclusions that threaten our sense of self-worth, which motivates us to rearrange our beliefs in ways that defang the threatening evidence.
I doubt that this picture of cognition will settle the dispute between rationalistic and irrationalistic models of belief change. But it does put pressure on purely rationalistic models, and it moves the debate past competing models of input-output functions toward competing theories of the structure and function of causal cognitive processes.
It matters whether it’s true that rationalization has the structure I’ve described, and not just for theoretical purposes. Cognitive dissonance is sensitive to group membership—when our group is criticized, we experience dissonance and feel compelled to rationalize the criticism away. This makes dissonance a plausible mechanism contributing to what Charles Mills calls “white ignorance”, including (e.g.) the persistent ignorance among white Americans of the continuing history of racism in the United States. Dissonance pushes people to preserve tribal allegiance as well as seek self-exonerating narratives when their morality is in question (as seen in the meat-eating literature). When these two factors are combined, rampant rationalization is likely.
Changing these tendencies requires understanding the psychological processes responsible for them. Unfortunately, doing so may force us to give up on a rationalistic picture of human nature.
If you want to hear more about the (potential) adaptive role of irrationality, check out Lisa Bortolotti's The Epistemic Innocence of Irrational Beliefs or––from a different angle––this paper on the evolution of overconfidence.
If you want to hear more about rationalization, check out Jake's full paper, the recent discussion piece by Fiery Cushman, "Rationalization is Rational", and the replies to it (including Jake's!), and recent articles by Jason D'Cruz and Eric Mandelbaum.