Ideas and Ideologies

A lot of smart people have noticed what a powerful (and detrimental) role this particular brand of mental gymnastics can play in our ideological interactions – from Bill Clinton…

The problem with any ideology is that it gives the answer before you look at the evidence. So you have to mold the evidence to get the answer that you’ve already decided you’ve got to have.

…to Arthur Conan Doyle, via his character Sherlock Holmes:

It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories instead of theories to suit facts.

Even George Orwell (perhaps unsurprisingly) weighed in on this phenomenon in his novel 1984, describing how people rid themselves of any thoughts or ideas that contradict their party’s ideology:

Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments […] and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.

To put it simply, then, there are basically two different approaches you might take when encountering new information. You can either examine all the facts dispassionately and then accept whichever truth they point to – even if it contradicts your preferred conclusion – or you can accept only those facts that are compatible with what you already believe, and rationalize the rest away so that you can maintain your existing beliefs without having to change any of them. In the latter case, you aren’t actually engaged in an honest search for truth – you’re searching for one very particular truth. You already have in mind the conclusion you’re aiming for, and you’re determined to arrive at it even if that means ignoring or dismissing unwelcome facts.

The technical term for this is “motivated reasoning.” Jonathan Haidt delves into some of the theory behind this behavior:

The social psychologist Tom Gilovich studies the cognitive mechanisms of strange beliefs. His simple formulation is that when we want to believe something, we ask ourselves, “Can I believe it?” Then (as [Deanna] Kuhn and [David] Perkins found), we search for supporting evidence, and if we find even a single piece of pseudo-evidence, we can stop thinking. We now have permission to believe. We have a justification, in case anyone asks.

In contrast, when we don’t want to believe something, we ask ourselves, “Must I believe it?” Then we search for contrary evidence, and if we find a single reason to doubt the claim, we can dismiss it.

You only need one key to unlock the handcuffs of must.

Psychologists now have file cabinets full of findings on “motivated reasoning,” showing the many tricks people use to reach the conclusions they want to reach. When subjects are told that an intelligence test gave them a low score, they choose to read articles criticizing (rather than supporting) the validity of IQ tests. When people read a (fictitious) scientific study that reports a link between caffeine consumption and breast cancer, women who are heavy coffee drinkers find more flaws in the study than do men and less caffeinated women. Pete Ditto, at the University of California at Irvine, asked subjects to lick a strip of paper to determine whether they have a serious enzyme deficiency. He found that people wait longer for the paper to change color (which it never does) when a color change is desirable than when it indicates a deficiency, and those who get the undesirable prognosis find more reasons why the test might not be accurate (for example, “My mouth was unusually dry today”).

The difference between a mind asking “Must I believe it?” versus “Can I believe it?” is so profound that it even influences visual perception. Subjects who thought that they’d get something good if a computer flashed up a letter rather than a number were more likely to see the ambiguous figure [below] as the letter B, rather than as the number 13.

If people can literally see what they want to see – given a bit of ambiguity – is it any wonder that scientific studies often fail to persuade the general public? Scientists are really good at finding flaws in studies that contradict their own views, but it sometimes happens that evidence accumulates across many studies to the point where scientists must change their minds. I’ve seen this happen in my colleagues (and myself) many times, and it’s part of the accountability system of science – you’d look foolish clinging to discredited theories. But for nonscientists, there is no such thing as a study you must believe. It’s always possible to question the methods, find an alternative interpretation of the data, or, if all else fails, question the honesty or ideology of the researchers.

And now that we all have access to search engines on our cell phones, we can call up a team of supportive scientists for almost any conclusion twenty-four hours a day. Whatever you want to believe about the causes of global warming or whether a fetus can feel pain, just Google your belief. You’ll find partisan websites summarizing and sometimes distorting relevant scientific studies. Science is a smorgasbord, and Google will guide you to the study that’s right for you.

[…]

People are quite good at challenging statements made by other people, but if it’s your belief, then it’s your possession – your child, almost – and you want to protect it, not challenge it and risk losing it.

Chris Mooney adds:

A large number of psychological studies have shown that people respond to scientific or technical evidence in ways that justify their preexisting beliefs. In a classic 1979 experiment, pro- and anti-death penalty advocates were exposed to descriptions of two fake scientific studies: one supporting and one undermining the notion that capital punishment deters violent crime and, in particular, murder. They were also shown detailed methodological critiques of the fake studies – and in a scientific sense, neither study was stronger than the other. Yet in each case, advocates more heavily criticized the study whose conclusions disagreed with their own, while describing the study that was more ideologically congenial as more “convincing.”

Since then, similar results have been found for how people respond to “evidence” about affirmative action, gun control, the accuracy of gay stereotypes, and much else. Even when study subjects are explicitly instructed to be unbiased and even-handed about the evidence, they often fail.

[…]

[In ideologically loaded cases such as these,] when we think we’re reasoning, we may instead be rationalizing. Or to use an analogy offered by University of Virginia psychologist Jonathan Haidt: We may think we’re being scientists, but we’re actually being lawyers. Our “reasoning” is a means to a predetermined end – winning our “case” – and is shot through with biases. They include “confirmation bias,” in which we give greater heed to evidence and arguments that bolster our beliefs, and “disconfirmation bias,” in which we expend disproportionate energy trying to debunk or refute views and arguments that we find uncongenial.

That’s a lot of jargon, but we all understand these mechanisms when it comes to interpersonal relationships. If I don’t want to believe that my spouse is being unfaithful, or that my child is a bully, I can go to great lengths to explain away behavior that seems obvious to everybody else – everybody who isn’t too emotionally invested to accept it, anyway. That’s not to suggest that we aren’t also motivated to perceive the world accurately – we are. Or that we never change our minds – we do. It’s just that we have other important goals besides accuracy – including identity affirmation and protecting one’s sense of self – and often those make us highly resistant to changing our beliefs when the facts say we should.

As Haidt points out (citing the research of Philip Tetlock), it actually is possible to overcome these biases under certain circumstances – namely, when you expect to be held accountable for the accuracy of your beliefs in a very particular way – but as he also notes, those circumstances hardly ever arise in the real world:

Tetlock found two very different kinds of careful reasoning. Exploratory thought is an “evenhanded consideration of alternative points of view.” Confirmatory thought is “a one-sided attempt to rationalize a particular point of view.” Accountability increases exploratory thought only when three conditions apply: (1) decision makers learn before forming any opinion that they will be accountable to an audience, (2) the audience’s views are unknown, and (3) they believe the audience is well informed and interested in accuracy.

When all three conditions apply, people do their darnedest to figure out the truth, because that’s what the audience wants to hear. But the rest of the time – which is almost all of the time – accountability pressures simply increase confirmatory thought. People are trying harder to look right than to be right.

It’s also worth noting that the greater your commitment to a particular belief (and in high-stakes realms like politics, religion, and morality, people’s level of commitment tends to be extreme), the harder it is to give it up. The more central an idea is to your thinking, the less likely you are to consider replacing it, even when the evidence against it becomes overwhelming. It may be no big deal to occasionally make slight adjustments to one or two minor beliefs on the fringes of your worldview, but if a belief underpins the entire foundation of your understanding, it’s simply unthinkable to ever change it – because that would mean tearing down the whole superstructure (perhaps one you’ve spent years constructing) and starting over again from scratch. As Tavris and Aronson write:

America is a mistake-phobic culture, one that links mistakes with incompetence and stupidity. So even when people are aware of having made a mistake, they are often reluctant to admit it, even to themselves, because they take it as evidence that they are a blithering idiot.

[…]

Most Americans know they are supposed to say “we learn from our mistakes,” but deep down, they don’t believe it for a minute. They think that mistakes mean you are stupid.

[…]

One lamentable consequence of the belief that mistakes equal stupidity is that when people do make a mistake, they don’t learn from it. They throw good money after bad.

They provide a particularly striking historical example:

Half a century ago, a young social psychologist named Leon Festinger and two associates infiltrated a group of people who believed the world would end on December 21. They wanted to know what would happen to the group when (they hoped!) the prophecy failed. The group’s leader, whom the researchers called Marian Keech, promised that the faithful would be picked up by a flying saucer and elevated to safety at midnight on December 20. Many of her followers quit their jobs, gave away their homes, and dispersed their savings, waiting for the end. Who needs money in outer space? Others waited in fear or resignation in their homes. (Mrs. Keech’s own husband, a nonbeliever, went to bed early and slept soundly through the night as his wife and her followers prayed in the living room.) Festinger made his own prediction: The believers who had not made a strong commitment to the prophecy – who awaited the end of the world by themselves at home, hoping they weren’t going to die at midnight – would quietly lose their faith in Mrs. Keech. But those who had given away their possessions and were waiting with the others for the spaceship would increase their belief in her mystical abilities. In fact, they would now do everything they could to get others to join them.

At midnight, with no sign of a spaceship in the yard, the group felt a little nervous. By 2 a.m., they were getting seriously worried. At 4:45 a.m., Mrs. Keech had a new vision: The world had been spared, she said, because of the impressive faith of her little band. “And mighty is the word of God,” she told her followers, “and by his word have ye been saved – for from the mouth of death have ye been delivered and at no time has there been such a force loosed upon the Earth. Not since the beginning of time upon this Earth has there been such a force of Good and light as now floods this room.”

The group’s mood shifted from despair to exhilaration. Many of the group’s members, who had not felt the need to proselytize before December 21, began calling the press to report the miracle, and soon they were out on the streets, buttonholing passersby, trying to convert them. Mrs. Keech’s prediction had failed, but not Leon Festinger’s.

You might be tempted to laugh at the gullibility of these seemingly ridiculous cultists, but we all share the same psychological propensity to stack the deck in favor of our preferred conclusions when we’re heavily invested in them; even the most intelligent and highly trained professionals are susceptible to it. In fact, even when the stakes are as high as they could possibly be, even when the lives of millions of people hang in the balance, it’s possible to fall prey to this tendency to “put our thumbs on the scale as we weigh the evidence,” as Simler puts it – to take the principle of “the benefit of the doubt” to its logical extreme in favor of our preferred conclusions. Gary Klein cites the example of the pivotal World War II battle of Midway:

[The Japanese] had reason for their overconfidence. They had smashed the Americans and British throughout the Pacific – at Pearl Harbor, in the Philippines, and in Southeast Asia. Now they prepared to use an attack on Midway Island to wipe out the remaining few aircraft carriers the Americans had left in the Pacific. The Japanese brought their primary strike force, the force that had struck at Pearl Harbor, with four of their finest aircraft carriers.

The battle didn’t go as planned. The Americans had broken enough of the Japanese code to know about the Japanese attack and got their own aircraft carriers into position before the Japanese arrived. The ambush worked. In a five-minute period, the United States sank three of the Japanese aircraft carriers. By the end of the day, it sank the fourth one as well. At the beginning of June 4, the Japanese Navy ruled the Pacific. By that evening, its domination was over, and Japan spent the rest of the war defending against an inevitable defeat.

What interests us here is the war game the Japanese conducted May 1–5 to prepare for Midway. The top naval leaders gathered to play out the plan and see if there were any weaknesses. Yamamoto himself was present. However, the Japanese brushed aside any hint of a weakness. At one point the officer playing the role of the American commander sent his make-believe forces to Midway ahead of the battle to spring an ambush not unlike what actually happened. The admiral refereeing the war game refused to allow it. He argued that it was very unlikely that the Americans would make such an aggressive move. The officer playing the American commander tearfully protested, not only because he wanted to do well in the war game, but also, and more importantly, because he was afraid his colleagues were taking the Americans too lightly. His protests were overruled. With Yamamoto looking on approvingly, the Japanese played the war game to the end and concluded that the plan didn’t have any flaws.

How could the Japanese leaders have been so willfully blind to the facts staring them right in the face, especially when the stakes were so high? The simple answer is that, precisely because the stakes were so high, the Japanese leaders became too psychologically committed to the stances they’d already taken – to the point that it was less painful to go on being wrong, and to have everyone (including themselves) keep thinking they were competent, than it was to be right by admitting their error and going back to the drawing board. If the stakes had been significantly lower – if they’d just been playing a casual game of Battleship over drinks with friends, say – it would have been no big deal to accept some good outside advice on improving their strategy. But because they’d put so much work into their battle plans, and because they were staking so much of their prestige as military commanders on the success of those plans, it was unthinkable to just scrap them and start all over from square one. Tavris and Aronson give another World War II analogy illustrating this mindset:

In that splendid film The Bridge on the River Kwai, Alec Guinness and his soldiers, prisoners of the Japanese in World War II, build a railway bridge that will aid the enemy’s war effort. Guinness agrees to this demand by his captors as a way of building unity and restoring morale among his men, but once he builds it, it becomes his – a source of pride and satisfaction. When, at the end of the film, Guinness finds the wires revealing that the bridge has been mined and realizes that Allied commandoes are planning to blow it up, his first reaction is, in effect: “You can’t! It’s my bridge. How dare you destroy it!” To the horror of the watching commandoes, he tries to cut the wires to protect the bridge. Only at the very last moment does Guinness cry, “What have I done?,” realizing that he was about to sabotage his own side’s goal of victory to preserve his magnificent creation.

We’ve all had moments like this, where we become so emotionally attached to our own ideas – so invested in winning the argument and preserving our beliefs – that we lose sight of our more important goal, which should be making sure we’re on the right side of the argument and holding the right beliefs in the first place. As Sister Y puts it:

A lot of us get stuck in traps. We become aware of a powerful insight (atheism, feminism, conspiracy theories) and begin to think it explains all of reality. We commit to our hard-won but limited set of insights until they calcify, protecting us from the trauma (and the pleasure) of further insights.

Sure, we can admit when we’re wrong about small, trivial things – no problem – but when it comes to the big things, we don’t want to let go of anything we’ve worked so hard on and made such an integral part of our identity. Not only is it deeply demoralizing to have to go back to square one; it’s embarrassing. Admitting you were wrong about something means losing face in a big way – especially if it’s something you were really vocal and adamant about previously.

Maybe if we had a different approach toward ideas and beliefs – one in which people didn’t have to feel so self-conscious about being wrong and could freely explore different possibilities in a genuinely open-ended search for truth – we wouldn’t keep running into these problems. But the whole “taking sides” dynamic doesn’t permit such an approach. Not only does it compel you to do all this mental gymnastics, massaging the facts to fit your side’s narrative; it also forces you into a mindset that is, by definition, inherently adversarial and hostile toward any outside challenges. You can’t have a “side,” after all, unless there’s an opposing side – and this means that beliefs and ideas aren’t just a matter of freely exploring different possibilities in an unselfconscious search for truth; they’re a matter of winning and losing. If you happen to be wrong about something, that doesn’t just mean you can correct course and continue onward, feeling grateful for the opportunity to have upgraded to a more accurate set of beliefs; it means you’ve embarrassed yourself and lost face.

There’s a good bit of psychological research suggesting that openly challenging a person’s beliefs in a direct confrontation doesn’t necessarily make them more open to opposing ideas (as it might if they learned about those ideas in a less adversarial context); on the contrary, direct confrontation often causes them to become more defensive, digging in their heels and entrenching themselves even further in their positions. Again, this is probably something you’ve experienced for yourself – what starts as a friendly disagreement (Person A: “I don’t think that’s really true”) gradually escalates (Person B: “Really? It seems pretty clear to me that it is true”); positions begin to harden (Person A: “Are you kidding? It’s obviously not true; you’d have to be a moron not to see that”), until finally both participants, having entered the conversation with a fairly modest level of confidence in their views, are swearing that their side is right with absolute 100% certainty. It’s not so much that they really are 100% certain of their views (if you asked them to bet their life savings on it, for instance, they’d probably start backpedaling pretty quickly, unless they’d gotten themselves so steamed up that they were beyond all reason); it’s more that their avowed “certainty” is serving as a proxy for something else, like how important the belief is to them, or how much they want to be perceived as being committed to that belief. A lot of the time, it’s just a way of making their argument seem more credible; if someone really is that confident in their beliefs, then those beliefs must be true, right?

Obviously, we know that that’s not the case. There are millions of Christians who will swear with 100% certainty that Jesus has visited them personally and revealed to them that Christianity is the one and only true religion; and there are millions of Muslims and Hindus and other believers who will swear the same thing with 100% certainty about their own deities; but they can’t all be right. Similarly, there are millions of liberals who claim to be 100% certain that their preferred policies are superior, while millions of conservatives claim to be 100% certain that their preferred policies are superior. Again, just because someone claims a high degree of certainty doesn’t prove that their ideas are more credible; all it proves is that people are capable of convincing themselves that they’re certain of things they don’t actually have any way of being certain of. (As Michel de Montaigne said, “Nothing is so firmly believed as that which we least know.”) Nobody wants to admit this out loud, though – especially when they’re talking to someone outside of their religion or political party – because they feel that admitting to anything less than absolute certainty in their beliefs would undermine the perceived credibility of those beliefs. If you’re trying to win an argument, the reasoning goes, then hedging your positions and conceding that there are a lot of unknown grey areas defeats the purpose. You need to know for a fact that your side is right; anything less is self-sabotage.
