Ideas and Ideologies (cont.)

Being aware of all the hallmarks of good discourse can make it much easier to recognize and avoid bad discourse – one of the first steps toward better thinking. If you can identify which sources of information are genuinely pursuing the truth, and which ones just seem more interested in posturing and arguing for their own sake, you can more readily filter out the “junk food” – i.e. those sources of ideological content that, while satisfying to consume, don’t actually teach you anything new or bring you any transformative insights. The most obvious offenders in this category, of course, are those that openly proclaim their lack of intellectual curiosity – the ones that pride themselves on avoiding deep deliberation because they consider it a distraction from the fight on the ground. Dagny discusses her own experience in such circles:

Anti-intellectualism is a pill I swallowed, but it got caught in my throat, and that would eventually save me. It comes in a few forms. Activists in these circles often express disdain for theory because they take theoretical issues to be idle sudoku puzzles far removed from the real issues on the ground. This is what led one friend of mine to say, in anger and disbelief, “People’s lives aren’t some theoretical issue!” That same person also declared allegiance to a large number of theories about people’s lives, which reveals something important. Almost everything we do depends on one theoretical belief or another, which range from simple to complex and from implicit to explicit. A theoretical issue is just a general or fundamental question about something that we find important enough to think about. Theoretical issues include ethical issues, issues of political philosophy, and issues about the ontological status of gender, race, and disability. Ultimately, it’s hard to draw a clear line between theorizing and thinking in general. Disdain for thinking is ludicrous, and no one would ever express it if they knew that’s what they were doing.

Specifically on the radical leftist side of things, one problem created by this anti-theoretical bent is a lot of rhetoric and bluster, a lot of passionate railing against the world or some aspect of it, without a clear, detailed, concrete alternative. There was a common excuse for this. As an activist friend wrote in an email, “The present organization of society fatally impairs our ability to imagine meaningful alternatives. As such, constructive proposals will simply end up reproducing present relations.” This claim is couched in theoretical language, but it is a rationale for not theorizing about political alternatives. For a long time I accepted this rationale. Then I realized that mere opposition to the status quo wasn’t enough to distinguish us from nihilists. In the software industry, a hyped-up piece of software that never actually gets released is called “vapourware.” We should be wary of political vapourware. If somebody’s alternative to the status quo is nothing, or at least nothing very specific, then what are they even talking about? They are hawking political vapourware, giving a “sales pitch” for something that doesn’t even exist.

These kinds of attempts to win arguments through sheer conviction rather than actual content – to use a bunch of absolutist rhetoric as a substitute for putting forth real substance – are another big red flag to watch out for. In fact, it’s worth paying attention to people’s tone just in general; most of the time, the more sanctimonious a writer or commentator’s tone is – the more they act like they know it all and start all their arguments with “Let me educate you on this” – the more you should take what they say with a grain of salt, because it’s a good sign that they’re more interested in portraying themselves as an authority than they are in actually finding out what the reality of an issue is. It’s not necessarily that being sanctimonious makes people more wrong, mind you; after all, there are sanctimonious people on both sides of every issue, and they can’t all be wrong. It’s more that the people who tend to make genuinely thoughtful judgments – who are wary of oversimplification and try to avoid leaping to premature conclusions – are more likely to be humble and judicious when presenting their ideas. They’re more likely to explain why their positions are correct in a thorough, methodical way, knowing that if their ideas are strong enough they’ll be able to stand up on their own – whereas the people whose opinions are less practically grounded will be more likely to use grandstanding and embellishment to make their case seem more convincing. If you’ve spent any time on the internet or watched any cable news, you’ve no doubt seen plenty of this; browsing through old posts online, there’s no end to the self-proclaimed experts declaring with absolute certainty that hyperinflation will destroy the economy within the year, or that Hillary Clinton is a lock to win the presidency, or that Google is a fad that’s about to collapse (this site is a good example of the kind of tone you typically see). The recurring theme with all of them is that they regard their opinions as completely obvious, and they can’t even imagine how anyone could assert the opposing view with a straight face; anyone who does so, in their estimation, must be either dishonest or braindead. Their level of confidence in these assertions is absolute – and of course, all too often it has no correlation whatsoever with how accurate the assertions actually turn out to be in the end.

It might be easy to laugh at these people’s certitude after they’re proven monumentally wrong; but being able to recognize it before the fact is trickier. You have to watch out for the kinds of persuasive techniques they use, because if you’re not careful they can creep into your judgment and distort your own level of confidence in your beliefs without you realizing it. If the only people you listen to are the ones who are constantly asserting with absolute confidence that your side is the correct one, you’re likely to start developing a false sense of certainty yourself. So as commenter stupidestpuppy advises, you should be mindful of your own mental state when you come across ideas that feel particularly satisfying to indulge:

I feel like a good way to approach news and politics is to be extra skeptical about anything that makes you happy, angry, or smug. Because you want to have those emotions you’re more willing to accept arguments that are illogical or not backed up by facts. There are too many people who accept things to be true because they want them to be true.

It’s easy to fall into the trap of becoming unduly confident in your own beliefs just because they feel so right that they simply have to be true. If a particular idea has an incredibly cohesive internal logic to it, and seems to beautifully integrate all the pieces of the puzzle into one simple explanation – if it’s the kind of idea that makes you think “My God, this explains everything” in a sudden rush of clarity – the sheer elegance of its explanatory power can be so seductive that it turns things like, say, actual real-world evidence and factual substantiation into mere afterthoughts. Our brains aren’t always very good at differentiating between something that seems “compelling” in an emotional or intuitive sense (in the same way that a deeply resonant movie or novel could be called a compelling piece of art) and something that seems “compelling” in a hard empirical sense (in the same way that fingerprints or phone recordings could be called compelling pieces of evidence in a court case); we tend to just lump it all together under the same general-purpose label of “compelling,” and so we end up thinking that things are more likely to be true simply because of their narrative resonance or whatever. But of course, the fact that an explanation is incredibly compelling on an emotional or personal level doesn’t constitute good evidence that it’s actually true in an empirical sense at all.

H.L. Mencken famously put it this way:

There is always a well-known solution to every human problem – neat, plausible, and wrong.

And in theory, this seems easy to accept. But in practice, it’s harder to adhere to. If you’ve got an explanation that you really like, your subconscious impulse will be to resist any counterargument that might force you to relinquish it. Because ideas that are comfortable and satisfying are easier to accept than ideas that are uncomfortable and inconvenient, you’re more likely to treat them as true, whether they actually are or not. It’s the whole motivated reasoning thing again.

But motivated reasoning and false certainty are dead ends. To borrow an example from Joseph Romm, think about what you would do in a situation where the stakes really were a matter of life and death – like if you thought that you (or your child) might have contracted a life-threatening disease. Would you try to find the one doctor out of 100 who was willing to tell you there was no problem and you had nothing to worry about? Or would you try to find the doctor who was the most accurate, and who always gave correct diagnoses even if they were upsetting to hear? If the truth actually matters – if there are real consequences at stake – then you can’t just insist that your preferred conclusions are the ones you’re going to believe. You have to be willing to consider the ugly, unsatisfying possibilities as well – because those might be the ones that are actually true. You have to insist that your map actually match the territory, because otherwise you may find yourself somewhere you don’t want to be.

The real key to getting at the truth is to resist the urge to think like a lawyer – only looking for good ideas on your own side and only looking for bad ideas on the opposing side – and instead to think like a scientist – looking for both good ideas that you can use and bad ideas that you can shoot down on both sides. If the ideas you favor really are the strongest ones available, then they’ll be able to withstand even the harshest scrutiny – but the only way to find out if that’s the case is to actually run them through that gauntlet. As McRaney notes, it’s this approach which has always produced the best results throughout history:

Your natural tendency is to start from a conclusion and work backward to confirm your assumptions, but the scientific method drives down the wrong side of the road and tries to disconfirm your assumptions. A couple of centuries back people began to catch on to the fact that looking for disconfirming evidence was a better way to conduct research than proceeding from common belief. They saw that eliminating suspicions caused the outline of the truth to emerge. Once your forefathers and foremothers realized that this approach generated results, in a few generations your species went from burning witches and drinking mercury to mapping the human genome and playing golf on the moon.

Actively trying to disprove your own beliefs – particularly ones that you feel strongly about – can feel wrong and unnatural, like you’re going against everything you believe in (because after all, that is technically what you’re doing). But if you’re able to scrutinize the arguments from your own side and look for bad ideas to shoot down just as critically as you would if those arguments were coming from the opposing side, you can often find flaws in your ideology that you might never have noticed otherwise – and you can make corrections and improvements that you might otherwise have overlooked. Taking a tough-love approach to your own worldview is a way of strengthening it, not weakening it. If you’re your own harshest critic, then you don’t have to be told you’re wrong. And conversely, if you’re able to view the opposing side’s arguments charitably and with an open mind, you can often discover new ideas that can be integrated into your worldview to strengthen it further still. It doesn’t necessarily mean you have to fully buy into a worldview you disagree with – you can “try on” different worldviews and explore their implications without converting fully to the beliefs you’re exploring. But this ability to try on different worldviews and dispassionately follow them to their logical conclusions – even ones that you’d vehemently object to in your usual mode of judgment – is an invaluable skill, which can allow you to see things and make connections no one else would notice. As Aristotle is claimed to have said: “It is the mark of an educated mind to be able to entertain a thought without accepting it.”

(Nerst also provides a good illustration of the point in this post; the more worldviews you can add to your conceptual toolkit – keeping them handy to deploy whenever they might be useful – the better.)

Here’s a good method for accomplishing this that you can try for yourself: The next time you’ve got an idea you’re really infatuated with, don’t just consider whether it’s true or false – try outright assuming that it’s false and see where that leads you. Like, if there were a genie who could grant you the power to know the whole truth of the universe, and it turned out (to your great shock) that your idea was unambiguously false, and your opponents’ ideas were unambiguously true, then how would you explain that? What possible explanations could you come up with that might be plausible?

This technique, of pre-emptively taking it for granted that you’re utterly wrong and then trying to figure out explanations for why, can be a much more effective way of finding the cracks in your ideology than the more traditional approach, as Tetlock and Gardner point out:

Researchers have found that merely asking people to assume their initial judgment is wrong, to seriously consider why that might be, and then make another judgment, produces a second estimate which, when combined with the first, improves accuracy almost as much as getting a second estimate from another person.
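
In concrete terms, the “combining” here is nothing exotic: the two judgments are simply averaged. Here’s a minimal sketch of that arithmetic, using hypothetical numbers rather than anything from the research:

```python
def combined_estimate(first_guess: float, second_guess: float) -> float:
    """Average an initial estimate with the re-estimate made after assuming the first was wrong."""
    return (first_guess + second_guess) / 2

# e.g. you first guess a project will take 30 days; forced to assume that guess is
# wrong, you reconsider and say 50 days; the combined estimate splits the difference.
print(combined_estimate(30, 50))  # 40.0
```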

And it can be effectively applied to all kinds of situations, from boardrooms to marriages, as Alexander adds:

There’s a rationalist tradition […] that before you get married, you ask all your friends to imagine that the marriage failed and tell you why. I guess if you just asked people “Will our marriage fail?” everyone would say no, either out of optimism or social desirability bias. If you ask “Assume our marriage failed and tell us why”, you’ll actually hear people’s concerns. I think this is the same principle.

Gary Klein explains the benefits of this technique for group projects (and even gives it a catchy name):

Projects fail at a spectacular rate. One reason is that too many people are reluctant to speak up about their reservations during the all-important planning phase. By making it safe for dissenters who are knowledgeable about the undertaking and worried about its weaknesses to speak up, you can improve a project’s chances of success.

Research conducted in 1989 by Deborah J. Mitchell, of the Wharton School; Jay Russo, of Cornell; and Nancy Pennington, of the University of Colorado, found that prospective hindsight – imagining that an event has already occurred – increases the ability to correctly identify reasons for future outcomes by 30%. We have used prospective hindsight to devise a method called a premortem, which helps project teams identify risks at the outset.

A premortem is the hypothetical opposite of a postmortem. A postmortem in a medical setting allows health professionals and the family to learn what caused a patient’s death. Everyone benefits except, of course, the patient. A premortem in a business setting comes at the beginning of a project rather than the end, so that the project can be improved rather than autopsied. Unlike a typical critiquing session, in which project team members are asked what might go wrong, the premortem operates on the assumption that the “patient” has died, and so asks what did go wrong. The team members’ task is to generate plausible reasons for the project’s failure.

A typical premortem begins after the team has been briefed on the plan. The leader starts the exercise by informing everyone that the project has failed spectacularly. Over the next few minutes those in the room independently write down every reason they can think of for the failure – especially the kinds of things they ordinarily wouldn’t mention as potential problems, for fear of being impolitic.

[…]

Next the leader asks each team member, starting with the project manager, to read one reason from his or her list; everyone states a different reason until all have been recorded. After the session is over, the project manager reviews the list, looking for ways to strengthen the plan.

[…]

Although many project teams engage in prelaunch risk analysis, the premortem’s prospective hindsight approach offers benefits that other methods don’t. Indeed, the premortem doesn’t just help teams to identify potential problems early on. It also reduces the kind of damn-the-torpedoes attitude often assumed by people who are overinvested in a project. Moreover, in describing weaknesses that no one else has mentioned, team members feel valued for their intelligence and experience, and others learn from them. The exercise also sensitizes the team to pick up early signs of trouble once the project gets under way. In the end, a premortem may be the best way to circumvent any need for a painful postmortem.

Ultimately, being able to find the faults in your own ideas just comes down to being able to put yourself in the right mindset. If you’re treating an idea like something you need to promote and protect, allowing yourself to admit its flaws won’t come as easily – but if you remove that need to protect the idea by taking it for granted that it has already failed, then you free your mind from the subconscious constraints you’ve placed on it, and can uncover mistakes that you might not have been allowing yourself to see before.

There are other good tricks to get yourself into a more open frame of mind as well. For instance, if you notice yourself feeling less receptive to opposing ideas than you’d like to be – like, say, if you’re a moderate liberal who wants to better understand Hayek’s arguments against government intervention in the market, but you find it hard to overcome your knee-jerk revulsion toward anything coming from the conservative side – it can be helpful to imagine that you’re reading the material to someone who’s even further down the ideological scale from you than you are from the material itself (e.g. an ultra-left communist or something). If you can imagine that you’re trying to find arguments to persuade that hypothetical extremist to adopt a more moderate position, you may find yourself feeling more receptive to conservative ideas (at least the good ones) in general. Similarly, if you’re (say) a conservative trying to better understand liberal arguments for feminism, you might imagine what kind of mindset you’d take toward someone who was significantly more conservative than you regarding gender roles – who believed that women should be completely subservient to men in every way, for instance. In so doing, you may find yourself feeling more open than usual to good feminist arguments that could be useful in a hypothetical debate with such a person.

Another technique, recommended by Luke Muehlhauser, is to override your feeling of certainty that you already have all the answers – and instead get curious about the things you don’t know – by importing that feeling of curious uncertainty from other contexts where you’ve already experienced it:

Step 1: Feel that you don’t already know the answer.

If you have beliefs about the matter already, push the “reset” button and erase that part of your map. You must feel that you don’t already know the answer.

Exercise 1.1: Import the feeling of uncertainty.

  1. Think of a question you clearly don’t know the answer to. When will AI be created? Is my current diet limiting my cognitive abilities? Is it harder to become the Prime Minister of Britain or the President of France?
  2. Close your eyes and pay attention to how that blank spot on your map feels. (To me, it feels like I can see a silhouette of someone in the darkness ahead, but I wouldn’t take bets on who it is, and I expect to be surprised by their identity when I get close enough to see them.)
  3. Hang on to that feeling or image of uncertainty and think about the thing you’re trying to get curious about. If your old certainty creeps back, switch to thinking about who composed the Voynich manuscript again, then import that feeling of uncertainty into the thing you’re trying to get curious about, again.

Exercise 1.2: Consider all the things you’ve been confident but wrong about.

  1. Think of things you once believed but were wrong about. The more similar those beliefs are to the beliefs you’re now considering, the better.
  2. Meditate on the frequency of your errors, and on the depths of your biases (if you know enough cognitive psychology).

Step 2: Want to know the answer.

Now, you must want to fill in this blank part of your map.

You mustn’t wish it to remain blank due to apathy or fear. Don’t avoid getting the answer because you might learn you should eat less pizza and more half-sticks of butter. Curiosity seeks to annihilate itself.

You also mustn’t let your desire that your inquiry have a certain answer block you from discovering how the world actually is. You must want your map to resemble the territory, whatever the territory looks like. This enables you to change things more effectively than if you falsely believed that the world was already the way you want it to be.

Exercise 2.1: Visualize the consequences of being wrong.

  1. Generate hypotheses about the ways the world may be. Maybe you should eat less gluten and more vegetables? Maybe a high-protein diet plus some nootropics would boost your IQ 5 points? Maybe your diet is fairly optimal for cognitive function already?
  2. Next, visualize the consequences of being wrong, including the consequences of remaining ignorant. Visualize the consequences of performing 10 IQ points below your potential because you were too lazy to investigate, or because you were strongly motivated to justify your preference for a particular theory of nutrition. Visualize the consequences of screwing up your neurology by taking nootropics you feel excited about but that often cause harm to people with cognitive architectures similar to your own.

Exercise 2.2: Make plans for different worlds.

  1. Generate hypotheses about the way the world could be – different worlds you might be living in. Maybe you live in a world where you’d improve your cognitive function by taking nootropics, or maybe you live in a world where the nootropics would harm you.
  2. Make plans for what you’ll do if you happen to live in World #1, what you’ll do if you happen to live in World #2, etc. (For unpleasant possible worlds, this also gives you an opportunity to leave a line of retreat for yourself.)
  3. Notice that these plans are different. This should produce in you some curiosity about which world you actually live in, so that you can make plans appropriate for the world you do live in rather than for one of the worlds you don’t live in.

Exercise 2.3: Recite the Litany of Tarski.

The Litany of Tarski can be adapted to any question. If you’re considering whether the sky is blue, the Litany of Tarski is:

If the sky is blue,
I desire to believe the sky is blue.
If the sky is not blue,
I desire not to believe the sky is blue.

Exercise 2.4: Recite the Litany of Gendlin.

The Litany of Gendlin reminds us:

What is true is already so.
Owning up to it doesn’t make it worse.
Not being open about it doesn’t make it go away.
And because it’s true, it is what is there to be interacted with.
Anything untrue isn’t there to be lived.
People can stand what is true,
for they are already enduring it.

Step 3: Sprint headlong into reality.

If you’ve made yourself uncertain and then curious, you’re now in a position to use argument, empiricism, and scholarship to sprint headlong into reality.

Again, it all comes down to your frame of mind. Embracing the prospect of your own wrongness is hard. You have to train your mind and your emotions the same way a martial arts master trains their body. But as Muehlhauser continues:

[Someone who was really interested in truth] would not flinch away from experiences that might destroy their beliefs. They would train their emotions to fit the facts.

Like the Litany of Tarski says, if there’s something you’re wrong about, you should want to know that. If there’s some flaw in your ideology or some area where your beliefs can be improved, you should want to know where those flaws are and how those improvements can be made, so that you can upgrade your worldview to one that’s more accurate. If you’re on the wrong side of an argument, then you should want to lose that argument – because the alternative is to go on believing something that isn’t true, and to set yourself up for embarrassment or disaster later on. Again, if your beliefs are true, then you can subject them to the harshest scrutiny imaginable, and they will still emerge even stronger than before. Truth has nothing to fear from honest inquiry. But if it’s actually your opponents’ beliefs that are true, then refusing to allow yourself to discover that fact is nothing but an act of pure self-sabotage. It might feel like acknowledging that you’ve been proven wrong is doing your opponents a favor – but by simply admitting it, allowing yourself to embrace a new truth, and growing stronger in your understanding of the world, you’re actually doing yourself the favor. As Pinker writes:

“Sunlight is the best disinfectant,” according to Justice Louis Brandeis’s famous case for freedom of thought and expression. If an idea really is false, only by examining it openly can we determine that it is false. At that point we will be in a better position to convince others that it is false than if we had let it fester in private, since our very avoidance of the issue serves as a tacit acknowledgment that it may be true. And if an idea is true, we had better accommodate our moral sensibilities to it, since no good can come from sanctifying a delusion. This might even be easier than the ideaphobes fear. The moral order did not collapse when the earth was shown not to be at the center of the solar system, and so it will survive other revisions of our understanding of how the world works.

Nearly two thousand years ago, Marcus Aurelius wrote about the importance of being able “to hear unwelcome truths.” This is what he was talking about. Occasionally you’ll encounter an idea that makes you uncomfortable to think about – you won’t be able to say where it’s wrong, exactly, but if you were to accept it as true, it’s not clear how you’d be able to preserve the conclusions you want to maintain – so your natural inclination will be to just stop focusing on it altogether, and spare yourself the mental discomfort. But this mental discomfort – or cognitive dissonance, as it’s called – is a sign that you should be paying more attention to the idea at hand, because it suggests that your current beliefs might not actually be as airtight as you thought. After all, in a situation where you knew your case was completely irrefutable – like if you were arguing with someone over whether the moon was made of cheese – you wouldn’t feel any mental discomfort or dissonance, because it’d be obvious that their ideas were nothing but a bunch of silly nonsense that you could just laugh off. The fact that you are feeling some dissonance means that you’re not in one of those situations – there might actually be some kernel of truth buried in the midst of those uncomfortable ideas – and if there is, then you need to have the fortitude to be able to dig it out – because ultimately, the difference between what’s true and what’s not true matters, and it’s important to get it right.

This ability to be “good at thinking of uncomfortable thoughts,” as Yudkowsky puts it – to notice when you’re feeling that cognitive dissonance and be willing to take it head on rather than suppressing it or flinching away from it – is what enables you to learn and become stronger in your beliefs. Refusing to recognize your own internal experiences of confusion and dissonance simply means that you’re denying yourself a critical mental mechanism that can subconsciously tip you off when you acquire a belief that’s false. If you can learn to recognize the various gradations of doubt and uncertainty associated with each of your beliefs – if you can cultivate the ability to notice that “quiet strain in the back of your mind” when your explanations for your beliefs start to “feel a little forced” (to quote Yudkowsky again) – then you can exploit that to your advantage by focusing in on it the way a detective focuses in on a clue. You can switch your mindset from one of false certainty to one of genuine curiosity. You can give yourself permission to explore the topic thoroughly enough to find the correct answers – rather than just pretending to have them all figured out already – and in so doing, you can actually resolve your feelings of confusion and uncertainty rather than merely suppressing them.

That sense of curiosity is the key. As Yudkowsky writes, you should try to approach every question with a mindset of wanting to explore counterarguments, not out of a grudging sense that it’s your intellectual duty to do so, but because you’re genuinely curious to find out where the truth lies:

Consider what happens to you, on a psychological level, if you begin by saying: “It is my duty to criticize my own beliefs.” Roger Zelazny once distinguished between “wanting to be an author” versus “wanting to write.” Mark Twain said: “A classic is something that everyone wants to have read and no one wants to read.” Criticizing yourself from a sense of duty leaves you wanting to have investigated, so that you’ll be able to say afterward that your faith is not blind. This is not the same as wanting to investigate.

This can lead to motivated stopping of your investigation. You consider an objection, then a counterargument to that objection, then you stop there. You repeat this with several objections, until you feel that you have done your duty to investigate, and then you stop there. You have achieved your underlying psychological objective: to get rid of the cognitive dissonance that would result from thinking of yourself as a rationalist and yet knowing that you had not tried to criticize your belief. You might call it purchase of rationalist satisfaction – trying to create a “warm glow” of discharged duty.

Afterward, your stated probability level will be high enough to justify your keeping the plans and beliefs you started with, but not so high as to evoke incredulity from yourself or other rationalists.

When you’re really curious, you’ll gravitate to inquiries that seem most promising of producing shifts in belief, or inquiries that are least like the ones you’ve tried before. Afterward, your probability distribution likely should not look like it did when you started out – shifts should have occurred, whether up or down; and either direction is equally fine to you, if you’re genuinely curious.

Contrast this to the subconscious motive of keeping your inquiry on familiar ground, so that you can get your investigation over with quickly, so that you can have investigated, and restore the familiar balance on which your familiar old plans and beliefs are based.

[…]

In the microprocess of inquiry, your belief should always be evenly poised to shift in either direction. Not every point may suffice to blow the issue wide open – to shift belief from 70% to 30% probability – but if your current belief is 70%, you should be as ready to drop it to 69% as raise it to 71%. You should not think that you know which direction it will go in (on average), because by the laws of probability theory, if you know your destination, you are already there. If you can investigate honestly, so that each new point really does have equal potential to shift belief upward or downward, this may help to keep you interested or even curious about the microprocess of inquiry.

[…]

There just isn’t any good substitute for genuine curiosity. A burning itch to know is higher than a solemn vow to pursue truth.
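
That line about already being at your destination points to a standard identity of probability theory, often called conservation of expected evidence: your current credence is already the probability-weighted average of whatever your credence would become after seeing the evidence, so you can’t know in advance which way you’ll update. Here is a minimal numerical check of that identity, with made-up numbers:

```python
# Conservation of expected evidence, checked numerically. The specific numbers are
# arbitrary illustrations, not anything from the text.
prior = 0.70            # current credence in hypothesis H
p_e_given_h = 0.80      # assumed probability of observing evidence E if H is true
p_e_given_not_h = 0.40  # assumed probability of observing E if H is false

p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior_if_e = p_e_given_h * prior / p_e                  # Bayes' rule, E observed
posterior_if_not_e = (1 - p_e_given_h) * prior / (1 - p_e)  # Bayes' rule, E not observed

expected_posterior = p_e * posterior_if_e + (1 - p_e) * posterior_if_not_e
print(prior, round(expected_posterior, 6))  # both 0.7: on average, no net update
```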

Unfortunately, people don’t generally like to approach questions in this way – because admitting that they could learn something new or improve their beliefs in some way would mean admitting that there might be gaps in their knowledge in the first place. Admitting to ignorance, no matter how partial or minor, often feels like admitting to stupidity. But it doesn’t have to feel this way. After all, as Simler points out, someone who hasn’t invested any of their ego in a particular belief “experiences no anguish in letting go of [that belief if it turns out to be false] and adopting a better one, even its opposite. In fact, it’s a pleasure. If I believe that my daughter’s soccer game starts at 6pm, but my neighbor informs me that it’s 5pm, I won’t begrudge his correction – I’ll be downright grateful.” In that kind of situation, there’s no sense that anyone’s self-image should feel threatened by being wrong – it’s just a matter of being mistaken – and so correcting the false belief is no big deal at all. The only reason why other beliefs feel different is because we’ve invested our ego in them, so we feel like we’re losing face by getting them wrong. But like I said, there’s no reason why this has to be the case – because being wrong really isn’t the same thing as being stupid, as Tavris and Aronson explain:

It’s another form of [Shimon] Peres’s [dictum that “when a friend makes a mistake, the friend remains a friend, and the mistake remains a mistake”]: Articulate the cognitions and keep them separate. “When I, a decent, smart person, make a mistake, I remain a decent, smart person and the mistake remains a mistake. Now, how do I remedy what I did?”

So embedded is the link between mistakes and stupidity in American culture that it can be shocking to learn that not all cultures share the same phobia about them. In the 1970s, psychologists Harold Stevenson and James Stigler became interested in the math gap in performance between Asian and American schoolchildren: By the fifth grade, the lowest-scoring Japanese classroom was outperforming the highest-scoring American classroom. To find out why, Stevenson and Stigler spent the next decade comparing elementary classrooms in the U.S., China, and Japan. Their epiphany occurred as they watched a Japanese boy struggle with the assignment of drawing cubes in three dimensions on the blackboard. The boy kept at it for forty-five minutes, making repeated mistakes, as Stevenson and Stigler became increasingly anxious and embarrassed for him. Yet the boy himself was utterly unselfconscious, and the American observers wondered why they felt worse than he did. “Our culture exacts a great cost psychologically for making a mistake,” Stigler recalled, “whereas in Japan, it doesn’t seem to be that way. In Japan, mistakes, error, confusion [are] all just a natural part of the learning process.” (The boy eventually mastered the problem, to the cheers of his classmates.) The researchers also found that American parents, teachers, and children were far more likely than their Japanese and Chinese counterparts to believe that mathematical ability is innate; if you have it, you don’t have to work hard, and if you don’t have it, there’s no point in trying. In contrast, most Asians regard math success, like achievement in any other domain, as a matter of persistence and plain hard work. Of course you will make mistakes as you go along; that’s how you learn and improve. It doesn’t mean you are stupid.

Making mistakes is central to the education of budding scientists and artists of all kinds, who must have the freedom to experiment, try this idea, flop, try another idea, take a risk, be willing to get the wrong answer. One classic example, once taught to American schoolchildren and still on many inspirational Web sites in various versions, is Thomas Edison’s reply to his assistant (or to a reporter), who was lamenting Edison’s ten thousand experimental failures in his effort to create the first incandescent light bulb. “I have not failed,” he told the assistant (or reporter). “I successfully discovered 10,000 elements that don’t work.” Most American children, however, are denied the freedom to noodle around, experiment, and be wrong in ten ways, let alone ten thousand. The focus on constant testing, which grew out of the reasonable desire to measure and standardize children’s accomplishments, has intensified their fear of failure. It is certainly important for children to learn to succeed; but it is just as important for them to learn not to fear failure. When children or adults fear failure, they fear risk. They can’t afford to be wrong.

There is another powerful reason that American children fear being wrong: They worry that making mistakes reflects on their inherent abilities. In twenty years of research with American schoolchildren, psychologist Carol Dweck has pinpointed one of the major reasons for the cultural differences that Stevenson and Stigler observed. In her experiments, some children are praised for their efforts in mastering a new challenge. Others are praised for their intelligence and ability, the kind of thing many parents say when their children do well: “You’re a natural math whiz, Johnny.” Yet these simple messages to children have profoundly different consequences. Children who, like their Asian counterparts, are praised for their efforts, even when they don’t “get it” at first, eventually perform better and like what they are learning more than children praised for their natural abilities. They are also more likely to regard mistakes and criticism as useful information that will help them improve. In contrast, children praised for their natural ability learn to care more about how competent they look to others than about what they are actually learning. They become defensive about not doing well or about making mistakes, and this sets them up for a self-defeating cycle: If they don’t do well, then to resolve the ensuing dissonance (“I’m smart and yet I screwed up”), they simply lose interest in what they are learning or studying (“I could do it if I wanted to, but I don’t want to”). When these kids grow up, they will be the kind of adults who are afraid of making mistakes or taking responsibility for them, because that would be evidence that they are not naturally smart after all.

Dweck has found that these different approaches toward learning and the meaning of mistakes – are they evidence that you are stupid or evidence that you can improve? – are not ingrained personality traits. They are attitudes, and, as such, they can change. Dweck has been changing her students’ attitudes toward learning and error for years, and her intervention is surprisingly simple: She teaches elementary-school children and college students alike that intelligence is not a fixed, inborn trait, like eye color, but rather a skill, like bike riding, that can be honed by hard work. This lesson is often stunning to American kids who have been hearing for years that intelligence is innate. When they accept Dweck’s message, their motivation increases, they get better grades, they enjoy their studies more, and they don’t beat themselves up when they have setbacks.

The moral of our story is easy to say, and difficult to execute. When you screw up, try saying this: “I made a mistake. I need to understand what went wrong. I don’t want to make the same mistake again.” Dweck’s research is heartening because it suggests that at all ages, people can learn to see mistakes not as terrible personal failings to be denied or justified, but as inevitable aspects of life that help us grow, and grow up.

And Ryan Holiday echoes these insights:

Too often, convinced of our own intelligence or success, we stay in a comfort zone that ensures that we never feel stupid (and are never challenged to learn or reconsider what we know). It obscures from view various weaknesses in our understanding, until eventually it’s too late to change course. This is where the silent toll is taken.

Each of us faces a threat as we pursue our craft. Like sirens on the rocks, ego sings a soothing, validating song – which can lead to a wreck. The second we let the ego tell us we have graduated, learning grinds to a halt. That’s why UFC champion and MMA pioneer Frank Shamrock said, “Always stay a student.” As in, it never ends.

The solution is as straightforward as it is initially uncomfortable: Pick up a book on a topic you know next to nothing about. Put yourself in rooms where you’re the least knowledgeable person. That uncomfortable feeling, that defensiveness that you feel when your most deeply held assumptions are challenged – what about subjecting yourself to it deliberately? Change your mind. Change your surroundings.

An amateur is defensive. The professional finds learning (and even, occasionally, being shown up) to be enjoyable; they like being challenged and humbled, and engage in education as an ongoing and endless process.

Larry Ellison recalls a conversation he once had with Bill Gates which exemplified this mentality perfectly:

It was the most interesting conversation I’ve ever had with Bill, and the most revealing. It was around eleven o’clock in the morning, and we were on the phone discussing some technical issue, I don’t remember what it was. Anyway, I didn’t agree with him on some point, and I explained my reasoning. Bill says, “I’ll have to think about that, I’ll call you back.” Then I get this call at four in the afternoon and it’s Bill continuing the conversation with “Yeah, I think you’re right about that, but what about A and B and C?” I said, “Bill, have you been thinking about this for the last five hours?” He said, yes, he had, it was an important issue and he wanted to get it right. Now Bill wanted to continue the discussion and analyze the implications of it all. I was just stunned. He had taken time and effort to think it all through and had decided I was right and he was wrong. Now, most people hate to admit they’re wrong, but it didn’t bother Bill one bit. All he cared about was what was right, not who was right. That’s what makes Bill very, very dangerous.

The truth is, there’s no need to feel defensive about the gaps in your knowledge – because everyone has gaps in their knowledge. Everyone has things that they think they’re right about but are actually wrong about; and everyone has things that just completely confuse them. Simply recognizing that this is the case – that there’s always room to improve your beliefs – can put you in a mindset that’s much more conducive to doing so. Manson’s advice here is:

Hold weaker opinions. Recognize that unless you are an expert in a field, there is a good chance that your intuitions or assumptions are flat-out wrong. The simple act of telling yourself (and others) before you speak, “I could be wrong about this,” immediately puts your mind in a place of openness and curiosity. It implies an ability to learn and to have a closer connection to reality.

In other words, the better you are at maintaining intellectual humility, the more room you’ll have for intellectual growth. As Chris Voss adds:

We must let what we know […] guide us but not blind us to what we do not know; we must remain flexible and adaptable to any situation; we must always retain a beginner’s mind.

And Sam Harris drives the point home:

Wherever we look, we find otherwise sane men and women making extraordinary efforts to avoid changing their minds.

Of course, many people are reluctant to be seen changing their minds, even though they might be willing to change them in private, seemingly on their own terms – perhaps while reading a book. This fear of losing face is a sign of fundamental confusion. Here it is useful to take the audience’s perspective: Tenaciously clinging to your beliefs past the point where their falsity has been clearly demonstrated does not make you look good. We have all witnessed men and women of great reputation embarrass themselves in this way. I know at least one eminent scholar who wouldn’t admit to any trouble on his side of a debate stage were he to be suddenly engulfed in flames.

If the facts are not on your side, or your argument is flawed, any attempt to save face is to lose it twice over. And yet many of us find this lesson hard to learn. To the extent that we can learn it, we acquire a superpower of sorts. In fact, a person who surrenders immediately when shown to be in error will appear not to have lost the argument at all. Rather, he will merely afford others the pleasure of having educated him on certain points.

The superpower analogy is a good one; research has shown that one of the key features of so-called “superforecasters” – i.e. people who are significantly better than average at recognizing trends, predicting future events, and just generally being right about things – is that they are good at incorporating new information into their worldviews and changing their minds as the facts dictate. Tetlock explains:

They tend to be more actively open-minded. They tend to treat their beliefs not as sacred possessions to be guarded but rather as testable hypotheses to be discarded when the evidence mounts against them. That’s [one] way in which they differ from many people. They try not to have too many ideological sacred cows. They’re willing to move fairly quickly in response to changing circumstances.

And this is a key point – not just that these superforecasters are open to changing their minds, but that they’re eager to do so, even when it means sacrificing one of their most central beliefs. By updating their beliefs more quickly, they spend less time being wrong and more time being right. Yudkowsky shares his thoughts on the matter:

I just finished reading a history of Enron’s downfall, The Smartest Guys in the Room, which hereby wins my award for “Least Appropriate Book Title”.

An unsurprising feature of Enron’s slow rot and abrupt collapse was that the executive players never admitted to having made a large mistake. When catastrophe #247 grew to such an extent that it required an actual policy change, they would say “Too bad that didn’t work out – it was such a good idea – how are we going to hide the problem on our balance sheet?” As opposed to, “It now seems obvious in retrospect that it was a mistake from the beginning.” As opposed to, “I’ve been stupid.” There was never a watershed moment, a moment of humbling realization, of acknowledging a fundamental problem. After the bankruptcy, Jeff Skilling, the former COO and brief CEO of Enron, declined his own lawyers’ advice to take the Fifth Amendment; he testified before Congress that Enron had been a great company.

Not every change is an improvement, but every improvement is necessarily a change. If we only admit small local errors, we will only make small local changes. The motivation for a big change comes from acknowledging a big mistake.

As a child I was raised on equal parts science and science fiction, and from Heinlein to Feynman I learned the tropes of Traditional Rationality: Theories must be bold and expose themselves to falsification; be willing to commit the heroic sacrifice of giving up your own ideas when confronted with contrary evidence; play nice in your arguments; try not to deceive yourself; and other fuzzy verbalisms.

A traditional rationalist upbringing tries to produce arguers who will concede to contrary evidence eventually – there should be some mountain of evidence sufficient to move you. This is not trivial; it distinguishes science from religion. But there is less focus on speed, on giving up the fight as quickly as possible, integrating evidence efficiently so that it only takes a minimum of contrary evidence to destroy your cherished belief.

I was raised in Traditional Rationality, and thought myself quite the rationalist. I switched to Bayescraft (Laplace/Jaynes/Tversky/Kahneman) in the aftermath of… well, it’s a long story. Roughly, I switched because I realized that Traditional Rationality’s fuzzy verbal tropes had been insufficient to prevent me from making a large mistake.

After I had finally and fully admitted my mistake, I looked back upon the path that had led me to my Awful Realization. And I saw that I had made a series of small concessions, minimal concessions, grudgingly conceding each millimeter of ground, realizing as little as possible of my mistake on each occasion, admitting failure only in small tolerable nibbles. I could have moved so much faster, I realized, if I had simply screamed “OOPS!”

And I thought: I must raise the level of my game.

There is a powerful advantage to admitting you have made a large mistake. It’s painful. It can also change your whole life.

It is important to have the watershed moment, the moment of humbling realization. To acknowledge a fundamental problem, not divide it into palatable bite-size mistakes.

Do not indulge in drama and become proud of admitting [how ignorant and flawed you are, and how prone to committing] errors. It is surely superior to get it right the first time. But if you do make an error, better by far to see it all at once. Even hedonically, it is better to take one large loss than many small ones. The alternative is stretching out the battle with yourself over years. The alternative is Enron.

Since then I have watched others making their own series of minimal concessions, grudgingly conceding each millimeter of ground; never confessing a global mistake where a local one will do; always learning as little as possible from each error. What they could fix in one fell swoop voluntarily, they transform into tiny local patches they must be argued into. Never do they say, after confessing one mistake, I’ve been a fool. They do their best to minimize their embarrassment by saying I was right in principle, or It could have worked, or I still want to embrace the true essence of whatever-I’m-attached-to. Defending their pride in this passing moment, they ensure they will again make the same mistake, and again need to defend their pride.

Better to swallow the entire bitter pill in one terrible gulp.

He sums up:

[One of the core virtues of rationality] is lightness. Let the winds of evidence blow you about as though you are a leaf, with no direction of your own. Beware lest you fight a rearguard retreat against the evidence, grudgingly conceding each foot of ground only when forced, feeling cheated. Surrender to the truth as quickly as you can. Do this the instant you realize what you are resisting; the instant you can see from which quarter the winds of evidence are blowing against you. Be faithless to your cause and betray it to a stronger enemy.

Now, obviously this doesn’t mean that you should just be constantly changing your views at the drop of a hat. If you’ve accumulated mountains of evidence in support of an idea over the course of several years, then just encountering one new piece of evidence against it shouldn’t automatically be a deal-breaker for that idea. You should weigh your beliefs in proportion to the overall balance of evidence. What it does mean, though, is that when that balance of evidence shifts in favor of a new idea, you shouldn’t spend a second longer clinging to the old flawed one than you have to – even if it’s a big one (in fact, especially if it’s a big one).
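
One way to picture that balance is in terms of log odds, where each piece of evidence shifts your belief by however strongly it favors one side, so a single contrary observation nudges a well-supported belief rather than overturning it. A toy sketch, with arbitrary numbers chosen purely for illustration:

```python
import math

def update(log_odds: float, likelihood_ratio: float) -> float:
    """Shift a belief (in log-odds form) by the strength of one piece of evidence."""
    return log_odds + math.log(likelihood_ratio)

def probability(log_odds: float) -> float:
    """Convert log odds back into a probability."""
    return 1 / (1 + math.exp(-log_odds))

belief = 0.0                    # start at even odds (50/50)
for _ in range(10):             # ten observations, each favoring the idea 3:1
    belief = update(belief, 3)
belief = update(belief, 1 / 3)  # one contrary observation, 3:1 against

print(f"{probability(belief):.4f}")  # still ~0.9999: the overall balance barely moves
```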

Of course, there’s a reason why the biggest beliefs are the hardest ones to let go of. If you’ve got a belief that’s so central to your thinking that it’s practically a cornerstone of your identity, changing it can feel like giving up who you are as a person. Not only that, but these kinds of foundational beliefs are often deeply tied into things like social identity and status – so if you’re a lifelong churchgoer who suddenly stops believing, for instance, you have to worry about the fallout that will inevitably come with no longer being part of that community. Or if you’re part of a social circle whose members are all liberals and you become a conservative, then good luck dealing with all the backlash on Facebook. Beliefs don’t just exist in a vacuum; there are often all kinds of tribal implications that accompany them, and the prospect of facing social marginalization, ridicule, or ostracism for your beliefs can be daunting.

But there are ways of dealing with this, too. As McRaney points out, one of the best ways to ease the process of making major ideological shifts is to have more than one tribe that you belong to. So for instance, if you happen to identify as a libertarian, you might also identify as a Catholic, a feminist, a transhumanist, a Harry Potter fan, a jazz lover, or any number of other affiliations. By spreading your identity across multiple domains in this way, you won’t feel as much pressure to conform precisely to the groupthink of one particular tribe – because even if you face pushback from one group for diverging from its orthodoxy, you’ll know that your alignment with that group only makes up one facet of your identity – it doesn’t define the entirety of who you are – and you’ll still have the safety net of being able to participate fully in the other communities you’re part of. By having a wide variety of ideological (and non-ideological) interests, you’ll be able to operate more freely in your beliefs, because you’ll have other ways of defining your identity aside from your membership in one particular tribe. That’s why McRaney’s advice is to move in as many circles as possible; the less you pigeonhole yourself, the freer you are to expand your horizons.

(Incidentally, this is the same reason why it’s easier to keep young people out of gangs if they’re also members of a sports team or a youth group or whatever – just giving them an alternative context in which to define their social standing and identity can prevent them from wanting to define themselves solely through their membership in a gang. It’s the same reason why someone who’s in a toxic relationship can have an easier time leaving if they’ve recently started spending time with a new group of friends. And it’s also a good reason why, if you encounter someone with a particularly repugnant or unpopular belief and you want to change their mind about it, you’re more likely to succeed by befriending them and welcoming them into your tribe than by trying to shame them and marginalize them even further than they already are. If you want to get someone to abandon their ideology, you have to show them that there’s an alternative ideology that they can feel like they belong to instead. Or as Stephanie Lepp puts it: “If you’re going to ask someone to jump ship, you have to give them a better ship to jump to; otherwise, what’s the incentive?”)

Another approach, of course, is just to reject the whole concept of having a tribal-based identity altogether, and be comfortable having your own unique beliefs as an individual rather than trying to define yourself in terms of which ideological tribes you’re part of. As Jacob Falkovich writes:

Paul Graham […] recommends “keeping your identity small.” If one self-identifies as ‘progressive’ or ‘anti-progressive,’ any dispute over policy and science on which an official ‘progressive position’ develops can become a threat to one’s identity. […] Labels of any sort are a detriment to clear thinking. In the absence of a position forced on someone by their identity, a person is free to choose a position based on logic and the available evidence.

And Graham himself elaborates:

I finally realized today why politics and religion yield such uniquely useless discussions.

As a rule, any mention of religion on an online forum degenerates into a religious argument. Why? Why does this happen with religion and not with Javascript or baking or other topics people talk about on forums?

[…]

I think what religion and politics have in common is that they become part of people’s identity, and people can never have a fruitful argument about something that’s part of their identity. By definition they’re partisan.

Which topics engage people’s identity depends on the people, not the topic. For example, a discussion about a battle that included citizens of one or more of the countries involved would probably degenerate into a political argument. But a discussion today about a battle that took place in the Bronze Age probably wouldn’t. No one would know what side to be on. So it’s not politics that’s the source of the trouble, but identity. When people say a discussion has degenerated into a religious war, what they really mean is that it has started to be driven mostly by people’s identities.

Because the point at which this happens depends on the people rather than the topic, it’s a mistake to conclude that because a question tends to provoke religious wars, it must have no answer. For example, the question of the relative merits of programming languages often degenerates into a religious war, because so many programmers identify as X programmers or Y programmers. This sometimes leads people to conclude the question must be unanswerable – that all languages are equally good. Obviously that’s false: anything else people make can be well or badly designed; why should this be uniquely impossible for programming languages? And indeed, you can have a fruitful discussion about the relative merits of programming languages, so long as you exclude people who respond from identity.

More generally, you can have a fruitful discussion about a topic only if it doesn’t engage the identities of any of the participants. What makes politics and religion such minefields is that they engage so many people’s identities. But you could in principle have a useful conversation about them with some people. And there are other topics that might seem harmless, like the relative merits of Ford and Chevy pickup trucks, that you couldn’t safely talk about with others.

The most intriguing thing about this theory, if it’s right, is that it explains not merely which kinds of discussions to avoid, but how to have better ideas. If people can’t think clearly about anything that has become part of their identity, then all other things being equal, the best plan is to let as few things into your identity as possible.

Most people reading this will already be fairly tolerant. But there is a step beyond thinking of yourself as x but tolerating y: not even to consider yourself an x. The more labels you have for yourself, the dumber they make you.

The idea here is that you should strive to be “post-partisan;” that is, you shouldn’t care which side an idea comes from, or whose narrative it supports – all you should care about is whether it’s true. You shouldn’t decide in advance that there are certain conclusions you have to reach before you’ve even examined all the facts; you should figure out what all the facts are first, and then your conclusions should follow from them. Ideally, specific beliefs shouldn’t be something that you even really consider to be “yours” at all (at least not in any kind of permanent, identity-defining sense); you should simply regard them as things that you happen to be holding because they’re the best ones available at the moment, but which you could swap out for better ones at any time. And in fact, at the most fundamental level, you shouldn’t even consider your beliefs to be something that you can choose for yourself in the first place. If you’re doing it right, then your worldview should simply be a condition imposed upon you by the facts of the world; and when you encounter new facts, your worldview should helplessly change to accommodate them, regardless of whether they contradict what you might prefer to be true. In Thomas Jefferson’s words: “We [must] not [be] afraid to follow the truth wherever it may lead.”

This mentality, of finding out what the facts are and then accepting whatever truths they point to, will often lead to combinations of beliefs that don’t fit neatly under one particular ideological label. But truth is non-denominational – it doesn’t constrain itself to one particular side 100% of the time – so why should you? There’s no law of nature that says you absolutely have to adhere fully to one of the pre-constructed ideologies that have already been defined by others. You can have your own set of beliefs that combines good ideas from a variety of sources and integrates them into a unique worldview. This doesn’t mean that you can’t still identify as a liberal or a conservative or a Christian or a Muslim or whatever if your beliefs still happen to coincide with most of those ideologies’ central doctrines; but it’s not an all-or-nothing thing. You don’t have to just pick one pre-assembled worldview from the menu. You can choose the buffet, so to speak, and assemble your own. The most important thing is just that whatever labels you might adopt, your core ideological identity above all others shouldn’t be “liberal” or “conservative” or “Christian” or “Muslim” or anything like that, but simply “seeker of truth.”

Being able to break out of the tribalist mentality and evaluate ideas solely for their truth value – and in a broader sense, being able to avoid all the pitfalls of motivated reasoning in general – means that you have to be able at times to mentally put a little distance between yourself and the topic, to evaluate things from a more detached “outside view” and decouple your emotional investments from your intellectual judgments. As Gregory Hays writes:

The discipline of perception requires that we maintain absolute objectivity of thought: that we see things dispassionately for what they are.

And Simler and Hanson elaborate:

An ideal political Do-Right will be the opposite of an ideologue. Because Do-Rights are concerned only with achieving the best outcomes for society, they won’t shy away from contrary arguments and evidence. In fact, they’ll welcome fresh perspectives (with an appropriately critical attitude, of course). When a smart person disagrees with them, they’ll listen with an open mind. And when, on occasion, they actually change one of their political beliefs, they’re apt to be grateful rather than resentful. Their pride might take a small hit, but they’ll swallow it for the sake of the greater good. Think of an effective business leader, actively seeking out different perspectives in order to make the best decisions – that’s how a Do-Right would consume political information.

As we’ve been discussing, though, this is easier said than done. They continue:

But of course, that’s not at all how real voters behave. Most of us live quite happily in our political echo chambers, returning again and again to news sources that support what we already believe. When contrary opinions occasionally manage to filter through, we’re extremely critical of them, although we’re often willing to swallow even the most specious evidence that confirms our views. And we’re more likely to engage in political shouting matches, full of self-righteous confidence, than to listen with the humility that we may (gasp!) be wrong.

The fact that we attach strong emotions to our political beliefs is another clue that we’re being less than fully honest intellectually. When we take a pragmatic, outcome-oriented stance to a given domain, we tend to react more dispassionately to new information. We do this every day in most areas of our lives, like when we buy groceries, pack for a vacation, or plan a birthday party. In these practical domains, we feel much less pride in what we believe, anger when our beliefs are challenged, or shame in changing our minds in response to new information. However, when our beliefs serve non-pragmatic functions, emotions tend to be useful to protect them from criticism.

Yes, the stakes may be high in politics, but even that doesn’t excuse our social emotions. High-stakes situations might reasonably bring out stress and fear, but not pride, shame, and anger. During a national emergency, for example, we hope that our leaders won’t be embarrassed to change their minds when new information comes to light. People are similarly cool and dispassionate when discussing existential risks like global pandemics and asteroid impacts – at least insofar as those risks are politically neutral. When talk turns to politicized risks like global climate change, however, our passions quickly return.

All of this strongly suggests that we hold political beliefs for reasons other than accurately informing our decisions.

Unfortunately, as Mark Hill points out, the incentive structures of most ideological debates nowadays are largely designed to inflame their participants’ emotions as much as possible rather than inhibit them:

There appears to be a horrible process that works like this:

A. In order to want to learn more about political issues, you must be enthusiastic about politics;

B. Enthusiasm about politics means you are more likely to be emotionally invested in the issues;

C. Emotional investment in the issues means a more negative attitude toward anyone who disagrees;

D. A negative attitude toward someone means being more dismissive of his point of view and being less open to changing your mind based on anything he says.

In the world of psychology, they call this attitude polarization; the more time the average person spends thinking about a subject, the more extreme his position becomes — even if he doesn’t run across any new information. Simply repeating your beliefs to yourself makes those beliefs stronger.

And it gets even worse when we wind up in a group — say, on an Internet message board full of people who agree with us, where we can all congratulate each other on being right. Researchers call that group polarization (in public — in private, they call it a “circle jerk”).

Of course, once you get to the point where you’re rooting so hard for one side of an issue that you’re just short of painting your chest in team colors, then all that time spent reading up on the issues stops being about becoming an informed citizen and becomes more about accumulating ammunition for the next argument.

It’s understandable that this happens so often. After all, when it comes to areas like politics and religion, the issues at hand are often ones that affect millions of people, and can even be matters of life and death. When the stakes are that high, how can you not get emotionally invested?

The key point here, though, is that there’s a difference between getting emotionally invested in an important issue, and getting emotionally invested in the specific arguments and beliefs you hold concerning that issue. A lot of people conflate the two, and think that if they’re passionate about their terminal values – life, liberty, justice, security, equality, etc. – then they should be equally staunch in their beliefs about how best to achieve those goals. But in fact, it’s the other way around; if you’re really committed to these ultimate goals, then you shouldn’t particularly care which policy provides the best means of achieving them, just that they get achieved in the best way possible. Your ideology should just be an instrument for accomplishing what you really care about – a means to an end – not something to defend for its own sake. Do school vouchers actually produce better educational results? Maybe so, maybe not; but either way you should be willing to embrace the answer, because the quality of the education should be what you care about, not the means. Does socialized medicine provide better health outcomes at a more efficient cost? Maybe so, maybe not; but you should gladly embrace whichever answer is true, because you shouldn’t be emotionally invested in the privatization or socialization of medicine as an end in itself; you should be emotionally invested in healthcare that’s effective and affordable. What about gun control? Drug legalization? It’s the same in each of these cases. If you’ve been advocating for policy X because you believe it’s the best way to achieve a particular goal, but then you discover that policy Y would actually be a more effective way of achieving it, you should be happy to drop policy X in a heartbeat, because all that should really matter to you is figuring out the best way to achieve the goal. Being passionate about worthy causes is a good thing – the last thing we need in the world is more apathetic cynics – but you should make sure that the things you’re getting emotionally invested in are your terminal values, not your individual object-level arguments for fulfilling them. For those arguments, dispassion is the key.

There are sometimes relatively easy ways to detect that your emotions might be affecting your judgment on a particular topic. Like if you notice yourself bristling at the very thought of the other side – if even the mere mention of the word “Obamacare” or “pro-life” or “atheist” or “Trump” is enough to make your blood start to boil – then you should be aware that your judgment on the matter is obviously going to be a bit biased. (This doesn’t necessarily mean that your bias is unwarranted or wrong, mind you – but it does mean that you’ll have to adjust for it if you don’t want to miss whatever insights the other side might actually have.) Similarly, if there’s a controversy in the news and you find yourself wanting to leap to one side’s defense before you even know all the details, it’s a good sign that your judgment has been at least partly compromised by emotional considerations.

It’s not always that easy to detect your own biases, though. All too often, you can think you’re being completely objective and dispassionate in your judgments, when really you’re unknowingly engaged in motivated reasoning. Even so, it’s possible to outsmart your own biases and turn the fear of being wrong to your advantage, by intentionally manipulating your own incentives to compel pure truth-seeking. If you can raise the stakes for being wrong to such a degree that they outweigh all your other considerations (like social signaling, emotional catharsis, etc.), you can minimize your self-deception and make it so that you have no choice but to be brutally honest with yourself about what you really believe (as opposed to what you’ve merely convinced yourself you believe, or what you merely wish were true). One way to do this is to put yourself into situations where it’s actually made explicit that factual accuracy is the only measure by which you’re being judged – not winning the argument or scoring points for your side, but just being able to assess reality as objectively as possible and make the most accurate projections based on that assessment. Tetlock proposes this in the form of organized competitions, implemented on a national scale:

I want to dedicate the last part of my career to improving the quality of public debate. And I see forecasting tournaments as a tool that can be used for that purpose. I believe that if partisans in debates felt that they were participating in forecasting tournaments in which their accuracy could be compared against that of their competitors, we would quite quickly observe the depolarization of many polarized political debates. People would become more circumspect, more thoughtful and I think that would on balance be a better thing for our society and for the world. So I think there are some tangible things in which the forecasting technology can be used to improve the technology of public debate, if only we were open to the possibility.

But there’s no reason why these kinds of forecasting competitions have to just be limited to formal, organized events. Simply having informal contests with your peers, and establishing norms of discourse in which success is defined solely by factual accuracy, can shift your collective mindset into a more dispassionate and constructive one. If everyone in the group stops thinking they can win points merely by preaching to the choir or antagonizing the other side – if the only way of winning prestige is to have the most objective and accurate view of the world – then they’ll be less inclined to waste their efforts on self-indulgent gestures, and more inclined to do their homework and figure out where the truth really lies.
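
If you want something concrete to score those informal contests with, one common accuracy metric in forecasting tournaments is the Brier score: the mean squared difference between the probabilities you assigned and what actually happened. Here’s a minimal sketch in Python (the pundits and their forecasts are hypothetical, made up purely to illustrate how calibrated hedging beats misplaced certainty):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and actual outcomes.

    forecasts: probabilities (0.0-1.0) assigned to "the event will happen"
    outcomes:  1 if the event happened, 0 if it didn't
    Lower is better: a perfect forecaster scores 0.0, and always saying 50% scores 0.25.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: two pundits forecast the same three events.
events_happened   = [1, 0, 0]
confident_pundit  = [0.95, 0.95, 0.05]   # near-certainty on everything
calibrated_pundit = [0.70, 0.40, 0.20]   # honest hedging

print(brier_score(confident_pundit, events_happened))   # ~0.30 -- punished for misplaced certainty
print(brier_score(calibrated_pundit, events_happened))  # ~0.10 -- rewarded for calibration
```

A scoring rule like this makes preaching to the choir worthless; the only way to do well is to be right, and to know how sure you really are.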

If you’re feeling particularly competitive, another way of rigging your own incentives to minimize self-deception (which might not be practical for everyone but which is worth mentioning here anyway just for illustrative purposes) is to put your money where your mouth is – i.e. to place literal bets on the propositions you’re arguing over. Just risking your intellectual reputation on an argument is one thing; even if you’re wrong, you can often still rationalize and make excuses for yourself. But putting yourself at risk of an actual, tangible financial loss – even if it’s just a small one – can have a certain way of sharpening your thinking and forcing you to be more honest with yourself about whether you really fully believe what you’re saying, or whether you’re just exaggerating your arguments for effect. “I’m 100% sure” can quickly turn into “Eh, maybe there’s some margin for error” if someone suddenly challenges you to put your hard-earned cash on the line. (As Alex Tabarrok puts it: “A bet is a tax on bullshit.”) And a lot of times, you’ll find that you don’t even have to make the bet at all; just considering what you would do if someone did challenge you to put money on it can make you realize that you aren’t quite as confident in your argument as you thought you were. So if you believed very strongly that, say, the president’s new economic plan would be such a disaster that it would lead to a financial crash within the next five years, or that legalizing gay marriage would lead to the end of straight marriage, or whatever other contentious view you wanted to argue, you could simply ask yourself what kind of odds you would hypothetically be willing to lay on those predictions if you were actually forced to do so. Would you be willing to offer 3-to-1 odds (i.e. you’d lose three times more if you were wrong than you’d win if you were right)? What about 100-to-1 odds? 1000-to-1? (Alternatively, you could try this other creative method called de Finetti’s Game.) You wouldn’t ever have to actually make these bets, of course; but just thinking about the issues in terms of confidence ratios – weighing your certainty against your uncertainty and putting a percentage on your confidence level rather than just a straight “yes” or “no” – can give you a much more nuanced understanding of things.
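
To make that “what odds would I accept?” exercise concrete: offering N-to-1 odds on a claim is only a break-even (or better) bet if your real confidence in the claim is at least N/(N+1). A quick sketch of the arithmetic (the odds values below are just examples, not anyone’s actual bets):

```python
def implied_confidence(odds):
    """Minimum probability you must assign to your claim for laying odds-to-1
    against yourself to break even: p * 1 - (1 - p) * odds >= 0  =>  p >= odds / (odds + 1)."""
    return odds / (odds + 1)

for odds in (1, 3, 10, 100, 1000):
    print(f"Laying {odds}-to-1 only makes sense if you're at least "
          f"{implied_confidence(odds):.1%} sure you're right.")
```

So casually claiming you’d lay 1000-to-1 amounts to claiming 99.9% confidence, which is a much stronger statement than most “I’m sure” assertions can actually back up.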

And this is the real bottom line here that all these techniques and thought exercises are designed to serve. If you allow yourself to get swept up in the kind of absolutist thinking that dominates so much of modern discourse – reducing every issue to an oversimplified black-and-white binary – you’ll end up missing all the nuances that make the issue a debate in the first place. It may feel more satisfying on a gut level to claim 100% certainty in your beliefs – and it may score you more cheap points on social media to frame every issue in absolutist terms. But being able to think probabilistically – to never presume 100% certainty on any issue, but instead to assign different degrees of probability to each of your beliefs – is far more likely to give you an accurate worldview. It’s true that black and white areas do exist. There are some things that really are absolute. But the point is that they aren’t the only areas that exist – there are all kinds of grey areas in between – and unless you can train yourself to think in greyscale, rather than thinking exclusively in black and white, you’ll never be able to understand the whole picture.

(See Nate Soares’s insightful post on the subject here.)

Speaking from personal experience, I can say that thinking probabilistically (when I actually manage to do it) has enabled me to more readily accept when my opponents make good points – because instead of feeling forced to say something like “That’s a good point; I thought X was a great idea but now I realize that it’s a terrible idea” (making such a dramatic 180-degree reversal of position on the spot is pretty difficult), I can say something more like “That’s a good point; I still mostly think favorably of X but now I’ve adjusted my position a few percentage points in the other direction, and if I continue to encounter more good evidence against it I’ll adjust my percentages even more.” That way, I can acknowledge the strength of their point while still accounting for the fact that all the prior evidence I’ve accumulated over the years weighs mostly in X’s favor on balance. And this has been a huge help for me when it comes to navigating these kinds of conversations.

But as it turns out, this isn’t just a one-way benefit. Adopting this kind of probabilistic approach doesn’t just make me more receptive to other people’s good ideas; it also has the fortunate side effect of increasing receptiveness in the other direction as well, leading to a more productive conversation all around. As Kathryn Schulz writes:

If being contradicted or facing other people’s categorical pronouncements tends to make listeners stubborn, defensive, and inclined to disagree, open expressions of uncertainty can be remarkably disarming. To take a trivial but common example, I once sat in on a graduate seminar in which a student prefaced a remark by saying, “I might be going out on a limb here.” Before that moment, the class had been contentious; the prevailing ethos seemed to be a kind of academic one-upmanship, in which the point was to undermine all previous observations. After this student’s comment, though, the room seemed to relax. Because she took it upon herself to acknowledge the provisionality of her idea, her classmates were able to contemplate its potential merit instead of rushing to invalidate it.

These kinds of disarming, self-deprecating comments (“this could be wrong, but…” “maybe I’m off the mark here…”) are generally considered more typical of the speech patterns of women than men. Not coincidentally, they are often criticized as overly timid and self-sabotaging. But I’m not sure that’s the whole story. Awareness of one’s own qualms, attention to contradiction, acceptance of the possibility of error: these strike me as signs of sophisticated thinking, far preferable in many contexts to the confident bulldozer of unmodified assertions. Philip Tetlock, too, defends these and similar speech patterns (or rather, the mental habits they reflect), describing them, admiringly, as “self-subversive thinking.” That is, they let us function as our own intellectual sparring partner, thereby honing – or puncturing – our beliefs. They also help us do greater justice to complex topics and make it possible to have riskier thoughts. At the same time, by moving away from decree and toward inquiry, they set the stage for more open and interesting conversations. Perhaps the most striking and paradoxical effect of the graduate student’s out-on-a-limb caveat was that, even as it presented her idea as potentially erroneous, it caused her classmates to take that idea more seriously: it inspired her listeners to actually listen.

Ultimately, it’s not hard to understand why the student’s classmates reacted as they did. Despite the common assumption that speaking with more certainty is always the best way to appear more credible, there are real limits to this approach. Sure, it might work for an audience that thinks the topic being debated is a simple one that should have a clear and unambiguous answer. But if the topic is a complex one with a lot of nuances, then the person with the most credibility will actually tend to be the one who recognizes and acknowledges those nuances, not the person who acts as though they don’t exist and the answer is obvious. Most people may not be aware of this dynamic at a conscious level when making their own arguments (hence the ubiquity of people exaggerating their level of certainty on seemingly every issue), but I think they often do realize it, if only subconsciously, when they hear the arguments of others. And I think they’re right to do so – because when it comes to the most complex issues, those with more nuanced opinions really do tend to have a better track record of actually understanding them accurately. Having the ability to differentiate between the easy, obvious questions and the hard, complex ones is itself a key intellectual faculty; and if someone consistently displays a total inability to recognize that distinction – if they not only act like every contentious issue is easy and obvious, but can’t seem to understand how anyone could possibly have any reason for thinking otherwise – then that can often be a red flag that their judgment isn’t actually as good as they think it is (and should accordingly be taken less seriously).

André Gide famously wrote:

Trust those who seek the truth, but doubt those who say they have found it.

If someone claims 100% certainty in their worldview – if they say that they know all they need to know – then it’s a good bet that their worldview is a grossly oversimplified one that doesn’t accurately reflect reality. It’s not that they’ve actually learned all they need to know; it’s just that they’ve chosen to stop learning. The real truth is that nobody knows everything (or even most things) – it’s not even physically possible – and one of the defining features of intellectual maturity is the ability to recognize this. If you’ve got certain beliefs that you’ve researched and found to be strongly supported by the best available evidence, then sure, you can assign a high level of confidence to those beliefs. (Yes, the moon really is made of rock and not cheese.) But if there are things that you haven’t quite learned enough about yet to justify having a confident opinion, then you should be honest with yourself about that; you shouldn’t just assert a confident opinion anyway for the sake of appearing more authoritative. A lot of people seem to think that they have to have a decisive opinion on every issue, and that if they don’t, they’ll look ignorant. But not every issue has an immediately knowable answer. If I ask you how many fingers I’m holding up behind my back, you won’t look more knowledgeable if you assert in complete seriousness that you know for a fact that I’m holding up four fingers – you’ll just look like a buffoon. The correct answer to questions like that is “I don’t know.” So if you find yourself in a situation where all the facts aren’t in yet and the answers aren’t clear, you shouldn’t just pick one prematurely and decide that that’s the answer you’re going to go with; you should be willing to suspend judgment until you know all that you need to know, and only then draw your conclusion about where the best evidence is pointing. You don’t just have to answer “definitely yes” or “definitely no” to every question. You can answer things like “I’d say about 70% yes” or “The evidence doesn’t seem conclusive one way or the other to me just yet” – and those can be more useful answers than if you’d just uncritically planted your flag in one side or the other.

Now, this doesn’t mean that you should use this as an excuse to avoid a particular topic because you might not like the conclusion. You’ll sometimes notice, for instance, that people do this when it comes to scientific questions where the answer might contradict their worldview. Instead of rolling up their sleeves and digging into the research, they’ll just say “Look, I’m not a scientist; I don’t have enough expertise to know all the technical details I’d need to answer this question for sure.” It’s good that they’re admitting their intellectual blind spots, of course, but what’s not so good is when they continue to leave those questions unanswered on purpose, intentionally neglecting to address those blind spots so that they can avoid reaching a conclusion that they don’t want to reach. They’ll rationalize that as long as the question is still open, there’s still room for their preferred conclusion to potentially be true – much like the person who refuses to go to the doctor because not knowing the state of their health allows them to continue believing that they’re perfectly healthy – and so they’ll just go right on avoiding the truth. They use the phrase “I don’t know” as an avoidance mechanism.

But “I don’t know” shouldn’t be an indication of which questions you want to avoid; it should be an indication of which questions you need to delve deeper into. If you can’t quite figure out what the right answer is, you should want to get down to the bottom of the mystery, not avoid it. As Jostein Gaarder says:

A philosopher knows that in reality he knows very little. That is why he constantly strives to achieve true insight. Socrates was one of these rare people. He knew that he knew nothing about life and the world. And now comes the important part: it troubled him that he knew so little.

A philosopher is therefore someone who recognizes that there is a lot he does not understand, and is troubled by it.

And as Alexander describes it:

I don’t know how the internal experience of curiosity works for other people, but to me it’s a sort of itch I get when the pieces don’t fit together and I need to pick at them until they do. I’ve talked to some actual scientists who have this way stronger than I do. An intellectually curious person is a heat-seeking missile programmed to seek out failures in existing epistemic paradigms.

You should always want to expand the scope of your knowledge; and whenever you say “I don’t know,” you should always want to follow it up with “…yet.” That’s what becoming more knowledgeable means – it’s a never-ending process of, as Tetlock and Gardner put it, “gradually getting closer to the truth by constantly updating [your beliefs] in proportion to the weight of the evidence.” It’s not always easy – sometimes you have to make dramatic shifts in your thinking, and sometimes you even have to give up core beliefs that you’ve spent years becoming deeply invested in – but you can never improve your worldview unless you’re willing to change it; and you can never make big improvements to your worldview unless you’re willing to make big changes. Again, it all comes back to taking your ego out of the equation. If you can do that – if you can simply learn to respond to good counterarguments with statements like “Oh yeah, my bad, I actually think you’re right about that point” – then you can be a lot more nonchalant about effortlessly changing your views when appropriate, and it won’t feel like a big deal, either to yourself or to your discussion partners. After all, it’s not that you’re having to painfully admit that you were wrong and therefore stupid; it’s simply that you’re taking a belief that was perfectly reasonable given the information available to you at the time, and updating it to an even more accurate view now that you have access to new information. (For some reason, calling it “updating” seems to make it easier than calling it “changing your mind.”) You’ve been following the optimal epistemic process the whole time, so why should you be ashamed like you’ve done something wrong? If anything, you should be proud of your open-mindedness. (As Cowen suggests, you might imagine yourself as “the best person in the world at listening to advice.”) If you can learn to pride yourself on your ability to update your knowledge in this way – if you can train yourself, as Sam Bowman writes, “to internalise the virtue of open-mindedness so that changing your mind makes you feel just as good as being ideologically consistent once did” – then that’s a good first step to becoming wiser on a whole other level than you were before.
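
For what it’s worth, “updating in proportion to the weight of the evidence” has a standard formal counterpart in Bayes’ rule, which turns a prior degree of confidence plus a new piece of evidence into a revised degree of confidence. A toy sketch (the numbers are invented purely for illustration):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a belief after seeing one piece of evidence.

    prior:               confidence in the belief before the evidence
    p_evidence_if_true:  how likely that evidence is if the belief is true
    p_evidence_if_false: how likely that evidence is if the belief is false
    """
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Hypothetical: you start out 80% confident that policy X works, then a study
# appears that you'd expect 30% of the time if X works and 70% of the time if not.
belief = 0.80
belief = bayes_update(belief, 0.3, 0.7)   # ~0.63: a real shift, not a total reversal
belief = bayes_update(belief, 0.3, 0.7)   # a second similar study drops it to ~0.42
print(round(belief, 2))
```

The point isn’t to run the numbers on every opinion you hold; it’s that each new piece of evidence should move you by some amount, rather than bouncing off entirely or flipping you all at once.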

Peter Watts provides a parable:

We climbed this hill. Each step up we could see farther, so of course we kept going. Now we’re at the top. […] And we look out across the plain and we see this other tribe dancing around above the clouds, even higher than we are. Maybe it’s a mirage, maybe it’s a trick. Or maybe they just climbed a higher peak we can’t see because the clouds are blocking the view. So we head off to find out – but every step takes us downhill. No matter what direction we head, we can’t move off our peak without losing our vantage point. So we climb back up again. We’re trapped on a local maximum.

But what if there is a higher peak out there, way across the plain? The only way to get there is to bite the bullet, come down off our foothill and trudge along the riverbed until we finally start going uphill again. And it’s only then you realize: Hey, this mountain reaches way higher than that foothill we were on before, and we can see so much better from up here.

But you can’t get there unless you leave behind all the tools that made you so successful in the first place. You have to take that first step downhill.
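
Watts’s “trapped on a local maximum” image is borrowed from optimization, where a greedy search that only ever steps uphill will happily stall on a small peak even when a much higher one exists across the valley. A toy illustration (the landscape function here is entirely made up):

```python
def height(x):
    """An invented landscape: a small foothill near x=2 and a taller mountain near x=8."""
    return max(0, 3 - (x - 2) ** 2) + max(0, 10 - (x - 8) ** 2)

def greedy_climb(x, step=0.1):
    """Keep taking whichever small step goes uphill; stop once every step leads down."""
    while True:
        best = max((x, x - step, x + step), key=height)
        if best == x:
            return x
        x = best

peak = greedy_climb(1.0)
print(round(peak, 1), round(height(peak), 2))   # stuck on the foothill: x ~= 2.0, height 3
peak = greedy_climb(6.0)
print(round(peak, 1), round(height(peak), 2))   # past the valley it finds x ~= 8.0, height 10
```

The climber starting on the foothill never reaches the mountain, because every route there begins by going down; that first downhill step is exactly what Watts is describing.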

Continued on next page →