Ideas and Ideologies


I.

There’s an old saying that goes: “Small minds discuss people. Average minds discuss events. Great minds discuss ideas.” It’s not exactly the kind of principle you can take as an absolute (at least not if you want to avoid seeming pretentious); obviously, discussing people and events can be plenty valuable sometimes. Still, I think the basic point that the saying is trying to convey – that it’s important to think about things on the level of the underlying concepts, not just the surface-level stuff – is a good one; so I like it as something to try and aspire to, at least, if not necessarily something to judge others by. If nothing else, I like that it so clearly draws the distinction between talking about people and events versus talking about ideas, because as soon as you put a label on it like that, you really start to notice just how little time we actually do spend talking about ideas. We talk about people and events constantly – you can turn on your TV or computer every day and spend hours seeing all the latest news stories and political scandals. And there are plenty of people who do exactly that; being an “information junkie” is the fashionable thing to do nowadays. But I think there’s a difference between information and knowledge; and having a head full of information isn’t the same thing as having a head full of ideas or expertise. Obsessively scrolling through Twitter and following every piece of Washington gossip won’t make you more knowledgeable in the fields of economics or foreign policy than spending that same amount of time reading good books on those subjects. Staying up to the minute on every new consumer technology gimmick won’t make you more knowledgeable in the fields of telecommunications or software development than spending the same amount of time actually learning the theory behind those technologies. Being “well-informed,” even maximally well-informed, doesn’t actually make you more knowledgeable in any real sense if the things you’re well-informed about don’t have any deep significance outside of their immediate context.

I probably don’t need to give a long list of examples here; you can see what I mean for yourself. Watch a random CNN segment from ten years ago – or hell, even ten weeks ago – and see how much of it is information that you think is still important for you to have in your head today. Chances are (barring some major historical event that you’d have heard about whether you closely followed the news or not – i.e. something that wouldn’t properly be considered part of the usual “news cycle” at all), it will be almost none of it. The facts will be outdated; the people being discussed will no longer be relevant; and a disproportionate share of the coverage will simply be dedicated to explaining ever-finer bits of minutiae – what the campaign manager is saying about the latest poll numbers out of New Hampshire, whether the press secretary might be stepping down and who their potential replacements could be, etc. – not necessarily to explaining why these minutiae should change your broader understanding of the world in any meaningful way. (Hence Nassim Taleb’s quip: “To be completely cured of newspapers, spend a year reading the previous week’s newspapers.”)

The same goes for most social media and blog discussions. As Shane Parrish points out, when most of what you see in your feed is coming from people who are manically churning out dozens of posts and Tweets per day in response to every little thing that happens, chances are you aren’t getting a whole lot of deep ideas from them; for the most part, all you’re getting are reactions – people’s most surface-level responses to whatever happens to be right in front of them at the moment.

(Case in point: As I’m writing this in mid-2018, the headline everybody’s obsessing over on CNN and Twitter is “Senator’s Wife Fires Back at White House Aide Who Mocked Her Husband.” We’ll see if this story is still important enough to dominate the national conversation by the time I publish this post, but something tells me it won’t be.)

I don’t entirely blame the news for being the way it is, mind you – as Parrish notes, “news is by definition something that doesn’t last.” Nor do I blame other outlets like social media and constantly-updating blogs for providing such transient content themselves; that’s what they were designed for. It’s not so much that I’m bothered by the fact that so little of this information content will matter ten years from now – it’s more the fact that we choose to dedicate such a disproportionate amount of our mental bandwidth and public discussion to it, often at the expense of deeper and more enduring forms of knowledge. I obviously don’t think that all news coverage is a waste of time, or that we should stop paying attention to it altogether (especially if it’s something major like a war or a pandemic). It should go without saying that knowing what’s happening in the world is a necessary first step to understanding it. But I do think it’s worth noticing what a remarkable proportion of news coverage – as well as online commentary, social media conversations, and all the other information feeds we engage with on a daily basis – is just “empty calories,” so to speak. It only tells us “what’s happening in the world” in the most literal, superficial sense; it doesn’t help us understand the underlying phenomena that give rise to these surface-level events, or give us any deeper insight into the broader forces at work.

I suppose part of the reason for this lack of depth is just that the superficial things are so much more accessible, and easier to understand, and easier to bring into a conversation, than the more complex ideas underlying them. It’s easier to have a strong opinion on a politician making some dumb gaffe than it is to determine whether her fiscal policies are fundamentally sound. It’s easier to condemn a religious figure for cheating on his wife than it is to explain why his religious worldview represents a flawed basis for morality. People like being able to weigh in on the issues of the day – we all want to feel like we’re smart and informed and can make a contribution to the conversation – and consequently, whichever issues of the moment allow the greatest number of people to jump into the conversation and take a strong stance are the ones that inevitably get all the attention. But because most people don’t have a deep store of esoteric knowledge about complex socio-political, economic, and philosophical issues, what this means is that most of the topics that dominate the social conversation are the ones that can appeal to the lowest common denominator – i.e. the shallowest and most superficial topics.

There’s actually a term for this phenomenon, it turns out: Parkinson’s Law of Triviality. What it says is that the amount of discussion generated by a particular topic will be inversely proportional to the complexity of that topic; in other words, the more complicated and difficult to understand a topic is, the less people will want to take a stance on it, but the simpler it is to understand, the more people will be willing to weigh in (and to do so vocally and passionately). This is also called “bikeshedding,” after the example given by C. Northcote Parkinson himself, in which a committee is given the task of approving plans for a 300-million-dollar nuclear power plant, and rather than spending their time debating the cost and proposed design of the reactor itself – a tremendously important issue – they pass the resolution in two minutes (because after all, nuclear reactors are expensive and hard to understand), and instead get hung up on the much easier-to-understand issue of a proposed bike shed to be built nearby, debating for hours over what type of materials should be used, whether the few hundred dollars to build the bike shed could be better appropriated elsewhere, and so forth. As the Wikipedia article summarizes it: “A reactor is so vastly expensive and complicated that an average person cannot understand it, so one assumes that those who work on it understand it. On the other hand, everyone can visualize a cheap, simple bicycle shed, so planning one can result in endless discussions because everyone involved wants to add a touch and show personal contribution.”

Unfortunately, when it comes to the really important topics in our society – social issues, religion, politics, etc. – I feel like a lot of the conversations we’re having are just glorified bikeshedding. We’re happy to spend hours every day talking about what the world’s major figures and groups have been up to lately, and what events have been making headlines; but like the nuclear power plant in the analogy, the fundamental ideas and philosophies motivating those people and driving those events are simply glossed over. It’s assumed that they’ve already been taken care of. Whether in news reports, blog posts, or social media discussions, it’s simply taken for granted that everyone has already figured out exactly where they stand on the ideas, worked out which philosophies they believe are the correct ones, and sorted themselves into opposing sides on every possible topic – and that therefore all that’s left to do is just to set the opposing sides loose against each other and start keeping score, in as much detail as possible, of which side is winning and which side is losing on any given day.

The thing is, though, I don’t think this assumption is actually true. I don’t think most people actually have thought through all the exact details of their own ideologies – not entirely. They may have convinced themselves that they have a firm grasp on things because they’ve read a lot of articles and watched a lot of videos on the relevant subjects (see Will Schoder’s video here for a great explanation of how this can happen). But I think if you asked them to take out a pen and paper and outline the framework of their worldview from the ground up – to explain their exact stance on every single issue (along with their justifications) from first principles, without referring to some other source of authority – most people wouldn’t be able to do it; or at least, they’d have to do a lot of thinking about it first. There would be a lot of beliefs that weren’t quite fully formulated, that didn’t necessarily connect to each other or fit together in a clear way, or that were adopted on an ad hoc basis. (Kevin Simler and Robin Hanson illustrate this by pointing out that “when people are asked the same policy question a few months apart, they frequently give different answers – not because they’ve changed their minds, but because they’re making up answers on the spot, without remembering what they said last time. It is even easy to trick voters into explaining why they favor a policy, when in fact they recently said they opposed that policy.”) One thing is for sure – hardly anyone who categorizes themselves under some ideological label like “Democrat” or “Republican” or “Christian” or “Muslim” would have come up with that exact bundle of political or religious beliefs if they’d had to formulate everything themselves from scratch.

But that’s the thing; most people never go to the trouble of building their own personal worldview, on an issue by issue basis, from the ground up. They don’t feel like they have to – because they can simply select one of the pre-constructed ideologies that already exist (Christianity, Liberalism, etc.) and adopt that ideology as their own. Why consider each individual issue on its own merits and spend all that time outlining your own ideological framework, when there are pre-packaged worldviews available that have already done the work of answering every question for you? You can just peg your own worldview to one of those and say, “There, whatever that ideology says is what I believe.”

This strikes me as weird, frankly. You’d think that in a country of 325 million people, there’d be 325 million different worldviews. Given the thousands of different topics it’s possible to have an opinion on, you’d think there would be an incredibly wide range of variation, with some people believing in gun control but not creationism or financial regulation or climate change, others believing in creationism and climate change but not gun control or financial regulation, still others believing in financial regulation and climate change but not gun control or creationism, and so on. But instead, what we see is this pervasive cultural mindset that there are only a few possible combinations of views that a person can reasonably hold – only two, really, in most contexts. In the political sphere, you’re either a Republican or a Democrat. In the religious sphere, you either belong to whatever religion you were born into or you aren’t religious at all. There are only considered to be two possible ideologies for a person to sort themselves into – only two possible answers to every problem – and nothing else even computes. As Orson Scott Card writes:

We are fully polarized – if you accept one idea that sounds like it belongs to either the blue or the red, you are assumed – nay, required – to espouse the entire rest of the package, even though there is no reason why supporting the war against terrorism should imply you’re in favor of banning all abortions and against restricting the availability of firearms; no reason why being in favor of keeping government-imposed limits on the free market should imply you also are in favor of giving legal status to homosexual couples and against building nuclear reactors. These issues are not remotely related, and yet if you hold any of one group’s views, you are hated by the other group as if you believed them all; and if you hold most of one group’s views, but not all, you are treated as if you were a traitor for deviating even slightly from the party line.

(Quick disclaimer, by the way: You’ll notice me using a lot of quotations in this post. You’ll also notice that some of them come from highly reputable sources, while others… not so much. But I like to take good ideas wherever I find them, and a lot of great stuff has been written on this topic (even from writers and commentators you might not expect), so I’d rather share those insights with you directly than clumsily try to rephrase them into my own words when it’s not necessary – even if the result is that this post ends up being more of a compilation of interesting ideas I’ve gathered from other people than just a product of my own thoughts.)

Why is it so black and white? Why do worldviews tend to cluster together into only two flavors like this? Why is it that, if I told you someone’s opinion on climate change, you could probably predict their views on gun control, gay marriage, and so on, even though these topics are completely unrelated? Well, one possible answer is that although the topics might not be innately linked, they can become strongly linked due to how frequently they’re associated with each other. As David McRaney writes:

Symbols are a big part of your life thanks to the associative architecture of your brain. When I write a terrible romance novel line such as “It should have been obvious she was born in Africa, she had a beautifully long, slender neck not unlike a…” you can finish my sentence because your brain long ago formed a connection to the words long, slender, neck, and Africa. Neuroscientists call this a semantic net – every word, image, idea, and feeling is associated with everything else, like an endless tree growing in every direction at once. When you smell popcorn, you think of the cinema. When you hear a Christmas song, you think of Christmas trees.

Scott Alexander elaborates on how this idea could apply to socio-political issues:

Little flow diagram things make everything better. Let’s make a little flow diagram thing.

[Say] we have our node “Israel”, which has either good or bad karma [depending on your political opinions]. Then there’s another node close by marked “Palestine”. We would expect these two nodes to be pretty anti-correlated. When Israel has strong good karma, Palestine has strong bad karma, and vice versa.

Now suppose you listen to Noam Chomsky talk about how strongly he supports the Palestinian cause and how much he dislikes Israel. One of two things can happen:

“Wow, a great man such as Noam Chomsky supports the Palestinians! They must be very deserving of support indeed!”

or

“That idiot Chomsky supports Palestine? Well, screw him. And screw them!”

So now there is a third node, Noam Chomsky, that connects to both Israel and Palestine, and we have discovered it is positively correlated with Palestine and negatively correlated with Israel. It probably has a pretty low weight, because there are a lot of reasons to care about Israel and Palestine other than Chomsky, and a lot of reasons to care about Chomsky other than Israel and Palestine, but the connection is there.

I don’t know anything about neural nets, so maybe this system isn’t actually a neural net, but whatever it is I’m thinking of, it’s a structure where eventually the three nodes reach some kind of equilibrium. If we start with someone liking Israel and Chomsky, but not Palestine, then either that’s going to shift a little bit towards liking Palestine, or shift a little bit towards disliking Chomsky.

Now we add more nodes. Cuba seems to really support Palestine, so they get a positive connection with a little bit of weight there. And I think Noam Chomsky supports Cuba, so we’ll add a connection there as well. Cuba is socialist, and that’s one of the most salient facts about it, so there’s a heavily weighted positive connection between Cuba and socialism. Palestine kind of makes noises about socialism but I don’t think they have any particular economic policy, so let’s say very weak direct connection. And Che is heavily associated with Cuba, so you get a pretty big Che – Cuba connection, plus a strong direct Che – socialism one. And those pro-Palestinian students who threw rotten fruit at an Israeli speaker also get a little path connecting them to “Palestine” – hey, why not – so that if you support Palestine you might be willing to excuse what they did and if you oppose them you might be a little less likely to support Palestine.

Back up. This model produces crazy results, like that people who like Che are more likely to oppose Israel bombing Gaza. That’s such a weird, implausible connection that it casts doubt upon the entire…

Oh. Wait. Yeah. Okay.

I think this kind of model, in its efforts to sort itself out into a ground state, might settle on some kind of General Factor Of Politics, which would probably correspond pretty well to the left-right axis.

As he sums up, the “theory is that everything in politics is mutually reinforcing.” So if, say, you start off only having one or two main issues that you really care about (like abortion or gun control), you’ll naturally be more receptive toward sources that agree with you on those issues, and will end up absorbing a lot of their opinions on other issues as well (like taxes, foreign policy, gay marriage, etc.) through osmosis. Acquiring one particular belief or set of beliefs will usually mean that certain others come along for the ride. And because everyone else – including the very sources you’re taking your cues from – is doing the same thing, associations between different ideas that might start off fairly weak can accumulate more weight as like-minded people influence and are influenced by each other in turn. People within the same web of influence can reinforce each other’s conceptual connections so much that it causes a consensus cluster of beliefs to form, and the result is that anyone who adopts one of the beliefs within a particular cluster will typically be much more likely to adopt the entire rest of the cluster as well, since every source they encounter that opposes abortion and gun control will also tend to oppose higher taxes and gay marriage, and vice-versa. Ultimately, just by listening to “the people who know what they’re talking about” – i.e. the people who agree with you on your core issues – you can easily end up pigeonholing yourself neatly into the same standard left-right dichotomy as everyone else without even meaning to.
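
For what it’s worth, Alexander’s node-and-karma model is concrete enough to actually run. Here’s a minimal sketch in Python – the specific edge weights are just my own guesses at the connections his example names, not figures from his post – in which the only outside “evidence” fed into the network is that you like Che:

```python
# A toy version of the node-and-karma model described above. Each node's
# "karma" lives in [-1, 1]; the signed edge weights are my own guesses at
# the connections named in the quote, not figures from the original post.
import math

weights = {
    ("Israel", "Palestine"): -0.8,      # strongly anti-correlated
    ("Chomsky", "Palestine"): +0.2,     # "pretty low weight"
    ("Chomsky", "Israel"): -0.2,
    ("Chomsky", "Cuba"): +0.2,
    ("Cuba", "Palestine"): +0.1,
    ("Cuba", "socialism"): +0.7,        # "heavily weighted"
    ("Palestine", "socialism"): +0.05,  # "very weak direct connection"
    ("Che", "Cuba"): +0.6,
    ("Che", "socialism"): +0.6,
}

nodes = {"Israel", "Palestine", "Chomsky", "Cuba", "socialism", "Che"}
karma = {n: 0.0 for n in nodes}

# The only outside "evidence" fed into the network: you come in liking Che.
external = {n: 0.0 for n in nodes}
external["Che"] = 1.0

def neighbors(node):
    """Yield (neighbor, weight) pairs for every edge touching this node."""
    for (a, b), w in weights.items():
        if a == node:
            yield b, w
        elif b == node:
            yield a, w

# Let each node's karma drift toward the weighted sum of its neighbors'
# karma plus any external evidence, squashed back into [-1, 1], until the
# whole web settles into its equilibrium ("ground state").
for _ in range(200):
    karma = {
        n: math.tanh(external[n] + sum(w * karma[m] for m, w in neighbors(n)))
        for n in nodes
    }

for n in sorted(nodes, key=karma.get, reverse=True):
    print(f"{n:10} {karma[n]:+.2f}")
```

Let it settle, and Che, Cuba, and socialism come out strongly positive, Palestine mildly positive, and Israel mildly negative – the “crazy result” that liking Che predicts opposing Israel falls out of nothing but the connection structure.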

In a way, this relates back to Parkinson’s example of the nuclear reactor, in which everyone quickly comes to an agreement because they all assume someone more qualified has already done the necessary research to justify it. If you encounter a new ideological issue that you haven’t considered very deeply for yourself just yet, or haven’t done much research on, but you still want to have an opinion and appear knowledgeable (and who doesn’t want to appear knowledgeable?), it’s easier to simply slot yourself into whichever of the pre-constructed ideologies you most closely identify with – one that already exists, and has already figured out its answer to each of the hundreds of issues in question – and adopt that faction’s view as your own, than it is to start from scratch and try to build your own conclusions from the ground up – doing your homework on each of the issues yourself, one by one, and cobbling together your own worldview that may or may not fit perfectly into the black-and-white paradigm used by everyone else. Instead of becoming knowledgeable the hard way, you can be knowledgeable by proxy, claiming your side’s collective expertise as your own. Instead of using the knowledge of trusted commentators to help you form your beliefs and fill in gaps in your worldview (a good and reasonable thing to do), you can use it as a means to skip that step entirely and pick a worldview that’s already been formed.

For a student taking a pop quiz, it’s easier to answer a question in multiple-choice format than in free-response essay format. With a multiple-choice question, you don’t have to show your work or explain your reasoning; all you have to do is select the right answer, and your competence is demonstrated. In terms of ideology, likewise, it’s easier simply to self-identify as a Republican or a Democrat or a Christian or a Muslim or whatever than to explain to anyone who asks that although you agree with one faction on issues A, B, and C, you like what this other faction has to say about issues D, E, and F, and on certain issues like G, H, and I you don’t really identify with any of the traditional factions and have formed your own particular views. The latter approach is still perfectly possible, obviously – there are a lot of smart people with independently-constructed worldviews that don’t fit neatly into the traditional dichotomies – but the former approach is undoubtedly the more convenient one.

This kind of ideological shortcut-taking – where everyone lumps themselves (and each other) into one of two groups rather than operating on a piecemeal basis – isn’t always a bad thing. Jason Brennan grants, for instance, that it can be useful for quickly identifying which political candidates’ views are closest to your own:

In modern democracies, most candidates join political parties. Political parties run on general ideologies and policy platforms. Individual candidates may have their own idiosyncrasies and preferences, but they have a strong tendency to fall in line and do what the party wants.

Many political scientists think party systems reduce the epistemic burdens of voting. Voters can get by reasonably well by treating all Republicans and Democrats as two homogeneous groups. In an election, instead of learning what this particular Republican and Democrat want to do, I can treat the candidates as standard Republicans and Democrats, and vote accordingly. This kind of statistical discrimination leads to mistakes on an individual basis, but on a macro level, with 535 members of Congress, these individual mistakes are likely to cancel out. The party system thus provides voters with a “cognitive shortcut”; it allows them to act as if they were reasonably well informed.

There’s much to be said for this line of reasoning. So long as voters tend to have reasonably accurate stereotypes of what policies the two major political parties tend to prefer, then voters as a whole can perform well by relying on such stereotypes.
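
That “likely to cancel out” claim is really just the law of large numbers, and a toy simulation makes it concrete. Everything below is invented purely for illustration – the one-dimensional left-right axis, the party positions, the size of each member’s idiosyncrasy:

```python
# A toy check of the claim that a voter's stereotype-based errors cancel
# out at the macro level. All numbers here are made up for illustration.
import random

random.seed(0)

PARTY_LINE = {"D": -1.0, "R": +1.0}  # the stereotype the voter relies on
parties = ["D"] * 268 + ["R"] * 267  # 535 members of Congress

# Each member's true position is the party line plus a personal quirk.
true_positions = [PARTY_LINE[p] + random.gauss(0, 0.5) for p in parties]

# A stereotyping voter guesses every member sits exactly on the party line.
guesses = [PARTY_LINE[p] for p in parties]

errors = [g - t for g, t in zip(guesses, true_positions)]
print(f"typical error per member: {sum(abs(e) for e in errors) / 535:.2f}")
print(f"net error across all 535: {sum(errors) / 535:+.3f}")
# The stereotype misses each individual member by a fair margin (~0.4 on
# this made-up scale), but the misses point in both directions, so the
# net error across the whole chamber lands near zero.
```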

On a more basic level, there’s also the simple fact that being part of a group is generally more fun and socially rewarding than being an unaffiliated loner without any allies or allegiances. There is power in numbers, and it can just feel so satisfying to belong to a group of like-minded people who will agree with you on all the important issues and praise you for agreeing with them in turn. People love being part of a team; so whenever there’s a contentious issue floating around, they will always want to take a side, even if there are only two sides to choose from. For a lot of people, this is probably the biggest reason of all to want to slot themselves into pre-constructed ideologies rather than constructing their own – because the ideas themselves aren’t actually their main consideration.

The problem with all this, though, is that once you take a side and plant your flag firmly in that camp, you develop a sort of “brand loyalty,” and your chosen affiliation starts to affect what you believe, instead of the other way around. As Carol Tavris and Elliot Aronson put it:

Once people form [an ideological] identity, usually in young adulthood, the identity does their thinking for them. That is, most people do not choose [an ideology] because it reflects their views; rather, once they choose [an ideology], its policies [or doctrines] become their views.

I could give countless examples of this. I remember hearing one case, for instance, of a Jewish man who casually mentioned one day that he didn’t believe in Hell anymore, and when asked why, replied that he had found out that Judaism doesn’t include a belief in Hell – so, since he was Jewish, Hell was no longer a part of what he believed. Balioc gives a similar common example of “a self-identified libertarian asking ‘can a libertarian believe X?’ rather than just figuring out whether X is a reasonable thing to believe.” Steven Pinker has mentioned his encounters with people who held very strong opinions either for or against the Trans-Pacific Partnership (a trade agreement from 2016), but when asked to give their reasoning, couldn’t actually explain what the TPP even was – they just knew that their social media feeds were adamant about supporting it or opposing it, and so they jumped on the bandwagon as well. And Tavris and Aronson themselves give some even more dramatic political examples:

[Social psychologist Lee Ross has gained valuable insights] from his laboratory experiments and from his efforts to reduce the bitter conflict between Israelis and Palestinians. Even when each side recognizes that the other side perceives the issues differently, each thinks that the other side is biased while they themselves are objective, and that their own perceptions of reality should provide the basis for settlement. In one experiment, Ross took peace proposals created by Israeli negotiators, labeled them as Palestinian proposals, and asked Israeli citizens to judge them. “The Israelis liked the Palestinian proposal attributed to Israel more than they liked the Israeli proposal attributed to the Palestinians,” he says. “If your own proposal isn’t going to be attractive to you when it comes from the other side, what chance is there that the other side’s proposal is going to be attractive when it actually comes from the other side?”

Closer to home, social psychologist Geoffrey Cohen found that Democrats will endorse an extremely restrictive welfare proposal, one usually associated with Republicans, if they think it has been proposed by the Democratic Party, and Republicans will support a generous welfare policy if they think it comes from the Republican Party. Label the same proposal as coming from the other side, and you might as well be asking people if they will favor a policy proposed by Osama bin Laden. No one in Cohen’s study was aware of their blind spot – that they were being influenced by their party’s position. Instead, they all claimed that their beliefs followed logically from their own careful study of the policy at hand, guided by their general philosophy of government.

This is the central flaw of the whole “taking sides” approach. Once you’ve decided that the question “Which ideology is right?” must be a multiple-choice question and not a free-response one, it means that the moment you pick an answer to that question and take a side, your journey of inquiry is done. You no longer feel the need to examine any new facts for yourself, because you already know everything you need to know – you’ve already determined which side is the one that’s correct about everything – so you can reject any contradictory arguments in advance without even hearing them. If any new fact does happen to creep into your awareness which contradicts what your side says is true, well then, all that means is that you have to rationalize the contradiction away somehow – either by coming up with some explanation for why the new fact is actually false or misleading, or by concocting some elaborate interpretation of your side’s ideology that demonstrates how your side has actually been properly accounting for this fact all along, or by some other equally convoluted method. You may not know exactly what the explanation is for why your side is right on some particular given issue, but you know that in the overall scheme of things, your side is the right one, so therefore some explanation must exist, even if you don’t know the particulars off the top of your head. What you can’t do, though, is just admit that in some cases the facts do appear to point in a direction other than your chosen side – much less admit that you have no problem accepting this because you have your own personal set of beliefs that don’t always map perfectly onto one side in the first place. Once you’ve picked a side, it becomes your answer to every problem. It becomes part of your identity – almost like your nationality or something – and it therefore has to be protected from any new information that might threaten it.

II.

A lot of smart people have noticed what a powerful (and detrimental) role this particular brand of mental gymnastics can play in our ideological interactions – from Bill Clinton…

The problem with any ideology is that it gives the answer before you look at the evidence. So you have to mold the evidence to get the answer that you’ve already decided you’ve got to have.

…to Arthur Conan Doyle, via his character Sherlock Holmes:

It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories instead of theories to suit facts.

Even George Orwell (perhaps unsurprisingly) weighed in on this phenomenon in his novel 1984, describing how people rid themselves of any thoughts or ideas that contradict their party’s ideology:

Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments […] and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.

To put it simply, then, there are basically two different approaches you might take when encountering new information. You can either examine all the facts dispassionately and then accept whichever truth they point to – even if it contradicts your preferred conclusion – or you can accept only those facts that are compatible with what you already believe, and rationalize the rest away so that you can maintain your already-held beliefs without having to change any of them. In the latter case, you aren’t actually undergoing an honest search for truth – you’re just searching for one very particular truth. You already have in mind the conclusion that you’re aiming for, and you’re determined to arrive at that conclusion even if it means ignoring or dismissing unwelcome facts.

The official term for this is “motivated reasoning.” Jonathan Haidt delves into some of the theory behind this behavior:

The social psychologist Tom Gilovich studies the cognitive mechanisms of strange beliefs. His simple formulation is that when we want to believe something, we ask ourselves, “Can I believe it?” Then (as [Deanna] Kuhn and [David] Perkins found), we search for supporting evidence, and if we find even a single piece of pseudo-evidence, we can stop thinking. We now have permission to believe. We have a justification, in case anyone asks.

In contrast, when we don’t want to believe something, we ask ourselves, “Must I believe it?” Then we search for contrary evidence, and if we find a single reason to doubt the claim, we can dismiss it.

You only need one key to unlock the handcuffs of must.

Psychologists now have file cabinets full of findings on “motivated reasoning,” showing the many tricks people use to reach the conclusions they want to reach. When subjects are told that an intelligence test gave them a low score, they choose to read articles criticizing (rather than supporting) the validity of IQ tests. When people read a (fictitious) scientific study that reports a link between caffeine consumption and breast cancer, women who are heavy coffee drinkers find more flaws in the study than do men and less caffeinated women. Pete Ditto, at the University of California at Irvine, asked subjects to lick a strip of paper to determine whether they have a serious enzyme deficiency. He found that people wait longer for the paper to change color (which it never does) when a color change is desirable than when it indicates a deficiency, and those who get the undesirable prognosis find more reasons why the test might not be accurate (for example, “My mouth was unusually dry today”).

The difference between a mind asking “Must I believe it?” versus “Can I believe it?” is so profound that it even influences visual perception. Subjects who thought that they’d get something good if a computer flashed up a letter rather than a number were more likely to see the ambiguous figure [below] as the letter B, rather than as the number 13.

If people can literally see what they want to see – given a bit of ambiguity – is it any wonder that scientific studies often fail to persuade the general public? Scientists are really good at finding flaws in studies that contradict their own views, but it sometimes happens that evidence accumulates across many studies to the point where scientists must change their minds. I’ve seen this happen in my colleagues (and myself) many times, and it’s part of the accountability system of science – you’d look foolish clinging to discredited theories. But for nonscientists, there is no such thing as a study you must believe. It’s always possible to question the methods, find an alternative interpretation of the data, or, if all else fails, question the honesty or ideology of the researchers.

And now that we all have access to search engines on our cell phones, we can call up a team of supportive scientists for almost any conclusion twenty-four hours a day. Whatever you want to believe about the causes of global warming or whether a fetus can feel pain, just Google your belief. You’ll find partisan websites summarizing and sometimes distorting relevant scientific studies. Science is a smorgasbord, and Google will guide you to the study that’s right for you.

[…]

People are quite good at challenging statements made by other people, but if it’s your belief, then it’s your possession – your child, almost – and you want to protect it, not challenge it and risk losing it.
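
Incidentally, the “Can I believe it?” / “Must I believe it?” asymmetry has a very simple logical skeleton: the same mixed evidence, run through an any when you want the claim to be true and an all when you don’t. A schematic sketch (my framing of the quote above, not Haidt’s):

```python
# The Gilovich asymmetry reduced to pure logic. `evidence` is a list of
# booleans: True supports the claim, False cuts against it. Entirely
# schematic - my framing of the quote above, not anything from Haidt.

def will_believe(want_it_true: bool, evidence: list[bool]) -> bool:
    if want_it_true:
        # "Can I believe it?" - one supporting item grants permission.
        return any(evidence)
    # "Must I believe it?" - one contrary item grants an exit.
    return all(evidence)

mixed = [True, False, True]        # genuinely mixed evidence
print(will_believe(True, mixed))   # True: we found our pseudo-evidence
print(will_believe(False, mixed))  # False: one key unlocked the handcuffs
```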

Chris Mooney adds:

A large number of psychological studies have shown that people respond to scientific or technical evidence in ways that justify their preexisting beliefs. In a classic 1979 experiment (PDF), pro- and anti-death penalty advocates were exposed to descriptions of two fake scientific studies: one supporting and one undermining the notion that capital punishment deters violent crime and, in particular, murder. They were also shown detailed methodological critiques of the fake studies – and in a scientific sense, neither study was stronger than the other. Yet in each case, advocates more heavily criticized the study whose conclusions disagreed with their own, while describing the study that was more ideologically congenial as more “convincing.”

Since then, similar results have been found for how people respond to “evidence” about affirmative action, gun control, the accuracy of gay stereotypes, and much else. Even when study subjects are explicitly instructed to be unbiased and even-handed about the evidence, they often fail.

[…]

[In ideologically loaded cases such as these,] when we think we’re reasoning, we may instead be rationalizing. Or to use an analogy offered by University of Virginia psychologist Jonathan Haidt: We may think we’re being scientists, but we’re actually being lawyers (PDF). Our “reasoning” is a means to a predetermined end – winning our “case” – and is shot through with biases. They include “confirmation bias,” in which we give greater heed to evidence and arguments that bolster our beliefs, and “disconfirmation bias,” in which we expend disproportionate energy trying to debunk or refute views and arguments that we find uncongenial.

That’s a lot of jargon, but we all understand these mechanisms when it comes to interpersonal relationships. If I don’t want to believe that my spouse is being unfaithful, or that my child is a bully, I can go to great lengths to explain away behavior that seems obvious to everybody else – everybody who isn’t too emotionally invested to accept it, anyway. That’s not to suggest that we aren’t also motivated to perceive the world accurately – we are. Or that we never change our minds – we do. It’s just that we have other important goals besides accuracy – including identity affirmation and protecting one’s sense of self – and often those make us highly resistant to changing our beliefs when the facts say we should.

As Haidt points out (citing the research of Philip Tetlock), it actually is possible to overcome these biases under certain specific circumstances, in which you expect to be held accountable for the accuracy of your beliefs in a very particular way – but as he also mentions, these circumstances hardly ever arise in the real world:

Tetlock found two very different kinds of careful reasoning. Exploratory thought is an “evenhanded consideration of alternative points of view.” Confirmatory thought is “a one-sided attempt to rationalize a particular point of view.” Accountability increases exploratory thought only when three conditions apply: (1) decision makers learn before forming any opinion that they will be accountable to an audience, (2) the audience’s views are unknown, and (3) they believe the audience is well informed and interested in accuracy.

When all three conditions apply, people do their darnedest to figure out the truth, because that’s what the audience wants to hear. But the rest of the time – which is almost all of the time – accountability pressures simply increase confirmatory thought. People are trying harder to look right than to be right.

It’s also worth noting that the greater your commitment is to a certain belief (and in high-stakes realms like politics, religion, and morality, people’s level of commitment tends to be extreme), the harder it is to give up your attachment to it. The more central an idea is to your thinking, the less likely it is that you’ll be willing to consider replacing it, even when the evidence against it becomes overwhelming. It may be no big deal to occasionally make slight adjustments to one or two minor beliefs on the fringes of your worldview, but if a belief underpins the entire foundation of your understanding, it’s simply unthinkable to ever change it – because that would mean tearing down the whole superstructure (perhaps one that you’ve spent years constructing) and starting all over again from ground zero. As Tavris and Aronson write:

America is a mistake-phobic culture, one that links mistakes with incompetence and stupidity. So even when people are aware of having made a mistake, they are often reluctant to admit it, even to themselves, because they take it as evidence that they are a blithering idiot.

[…]

Most Americans know they are supposed to say “we learn from our mistakes,” but deep down, they don’t believe it for a minute. They think that mistakes mean you are stupid.

[…]

One lamentable consequence of the belief that mistakes equal stupidity is that when people do make a mistake, they don’t learn from it. They throw good money after bad.

They provide a particularly striking historical example:

Half a century ago, a young social psychologist named Leon Festinger and two associates infiltrated a group of people who believed the world would end on December 21. They wanted to know what would happen to the group when (they hoped!) the prophecy failed. The group’s leader, whom the researchers called Marian Keech, promised that the faithful would be picked up by a flying saucer and elevated to safety at midnight on December 20. Many of her followers quit their jobs, gave away their homes, and dispersed their savings, waiting for the end. Who needs money in outer space? Others waited in fear or resignation in their homes. (Mrs. Keech’s own husband, a nonbeliever, went to bed early and slept soundly through the night as his wife and her followers prayed in the living room.) Festinger made his own prediction: The believers who had not made a strong commitment to the prophecy – who awaited the end of the world by themselves at home, hoping they weren’t going to die at midnight – would quietly lose their faith in Mrs. Keech. But those who had given away their possessions and were waiting with the others for the spaceship would increase their belief in her mystical abilities. In fact, they would now do everything they could to get others to join them.

At midnight, with no sign of a spaceship in the yard, the group felt a little nervous. By 2 a.m., they were getting seriously worried. At 4:45 a.m., Mrs. Keech had a new vision: The world had been spared, she said, because of the impressive faith of her little band. “And mighty is the word of God,” she told her followers, “and by his word have ye been saved – for from the mouth of death have ye been delivered and at no time has there been such a force loosed upon the Earth. Not since the beginning of time upon this Earth has there been such a force of Good and light as now floods this room.”

The group’s mood shifted from despair to exhilaration. Many of the group’s members, who had not felt the need to proselytize before December 21, began calling the press to report the miracle, and soon they were out on the streets, buttonholing passersby, trying to convert them. Mrs. Keech’s prediction had failed, but not Leon Festinger’s.

You might be tempted to laugh at the gullibility of these seemingly ridiculous cultists, but we all share the same psychological propensity to stack the deck in favor of our preferred conclusions when we’re heavily invested in them; even the most intelligent and highly-trained professionals in their fields are susceptible to this. In fact, even when the stakes are as high as they could possibly be, even when the lives of millions of people hang in the balance, it’s possible to fall prey to this tendency to “put our thumbs on the scale as we weigh the evidence,” as Simler puts it – to take the principle of “the benefit of the doubt” to its logical extreme in favor of our preferred conclusions. Gary Klein cites the example of the pivotal World War II battle of Midway:

[The Japanese] had reason for their overconfidence. They had smashed the Americans and British throughout the Pacific – at Pearl Harbor, in the Philippines, and in Southeast Asia. Now they prepared to use an attack on Midway Island to wipe out the remaining few aircraft carriers the Americans had left in the Pacific. The Japanese brought their primary strike force, the force that had struck at Pearl Harbor, with four of their finest aircraft carriers.

The battle didn’t go as planned. The Americans had broken enough of the Japanese code to know about the Japanese attack and got their own aircraft carriers into position before the Japanese arrived. The ambush worked. In a five-minute period, the United States sank three of the Japanese aircraft carriers. By the end of the day, it sank the fourth one as well. At the beginning of June 4, the Japanese Navy ruled the Pacific. By that evening, its domination was over, and Japan spent the rest of the war defending against an inevitable defeat.

What interests us here is the war game the Japanese conducted May 1–5 to prepare for Midway. The top naval leaders gathered to play out the plan and see if there were any weaknesses. Yamamoto himself was present. However, the Japanese brushed aside any hint of a weakness. At one point the officer playing the role of the American commander sent his make-believe forces to Midway ahead of the battle to spring an ambush not unlike what actually happened. The admiral refereeing the war game refused to allow it. He argued that it was very unlikely that the Americans would make such an aggressive move. The officer playing the American commander tearfully protested, not only because he wanted to do well in the war game, but also, and more importantly, because he was afraid his colleagues were taking the Americans too lightly. His protests were overruled. With Yamamoto looking on approvingly, the Japanese played the war game to the end and concluded that the plan didn’t have any flaws.

How could the Japanese leaders have been so willfully blind to the facts that were staring them right in the face, especially when the stakes were so high? The simple answer is that, precisely because the stakes were so high, the Japanese leaders became too psychologically committed to the stances they’d already taken – to the point that it was less psychologically painful to continue being wrong, and to just have everyone think they were competent (including themselves), than it was to be right by admitting their error and going back to the drawing board. If the stakes had been significantly lower – for instance, if they’d just been playing a casual game of Battleship over drinks with friends – it would have been no big deal to accept some good outside advice on improving their strategy. But because they’d put so much work into their battle plans, and because they were staking so much of their prestige as military commanders on the success of these plans, it was unthinkable to just scrap them and start all over from square one. Tavris and Aronson give another World War II-related analogy illustrating this mindset:

In that splendid film The Bridge on the River Kwai, Alec Guinness and his soldiers, prisoners of the Japanese in World War II, build a railway bridge that will aid the enemy’s war effort. Guinness agrees to this demand by his captors as a way of building unity and restoring morale among his men, but once he builds it, it becomes his – a source of pride and satisfaction. When, at the end of the film, Guinness finds the wires revealing that the bridge has been mined and realizes that Allied commandoes are planning to blow it up, his first reaction is, in effect: “You can’t! It’s my bridge. How dare you destroy it!” To the horror of the watching commandoes, he tries to cut the wires to protect the bridge. Only at the very last moment does Guinness cry, “What have I done?,” realizing that he was about to sabotage his own side’s goal of victory to preserve his magnificent creation.

We’ve all had moments like this, where we become so emotionally attached to our own ideas – so invested in winning the argument and preserving our beliefs – that we lose sight of our more important goal, which should be making sure we’re on the right side of the argument and holding the right beliefs in the first place. As Sister Y puts it:

A lot of us get stuck in traps. We become aware of a powerful insight (atheism, feminism, conspiracy theories) and begin to think it explains all of reality. We commit to our hard-won but limited set of insights until they calcify, protecting us from the trauma (and the pleasure) of further insights.

Sure, we can admit when we’re wrong about small, trivial things – no problem – but when it comes to the big things, we don’t want to let go of anything we’ve worked so hard on and made such an integral part of our identity. Not only is it deeply demoralizing to have to go back to square one; it’s embarrassing. Admitting you were wrong about something means losing face in a big way – especially if it’s something you were really vocal and adamant about previously.

Maybe if we had a different approach toward ideas and beliefs – one in which people didn’t have to feel so self-conscious about being wrong and could freely explore different possibilities in a genuinely open-ended search for truth – we wouldn’t keep running into these problems. But the whole “taking sides” dynamic doesn’t permit such an approach. Not only does it compel you to do all these mental gymnastics massaging the facts to fit into your side’s narrative; it also forces you into a mindset that is, by definition, inherently adversarial and hostile toward any outside challenges. You can’t have a “side,” after all, unless there’s an opposing side – and this means that beliefs and ideas aren’t just a matter of freely exploring different possibilities in an unselfconscious search for truth, they’re a matter of winning and losing. If you happen to be wrong about something, that doesn’t just mean you can correct course and continue onward, feeling grateful for the opportunity to have upgraded to a more accurate set of beliefs; it means you’ve embarrassed yourself and lost face.

There’s a good bit of psychological research to suggest that when you openly challenge a person’s beliefs in a direct confrontation, it doesn’t necessarily make them more open to opposing ideas (as it might if they learned about those ideas in a less adversarial context); on the contrary, what can often happen is that direct confrontation causes them to become more defensive, digging in their heels and entrenching themselves even further in their positions. Again, this is probably something you’ve experienced for yourself – what starts as a friendly disagreement (Person A: “I don’t think that’s really true”) gradually begins to escalate (Person B: “Really? It seems pretty clear to me that it is true”); positions begin to harden (Person A: “Are you kidding? It’s obviously not true; you’d have to be a moron not to see that”), until finally you reach a point where both participants, having entered the conversation with a fairly modest level of confidence in their views, are now swearing that their side is right with absolute 100% certainty. This isn’t so much that they really are 100% certain in their views (if you asked them to bet their life savings on it, for instance, they’d probably start backpedaling pretty quickly unless they’d gotten themselves so steamed up that they were beyond all reason); it’s more that their avowed “certainty” is serving as a proxy for something else, like how important the belief is to them, or how much they want to be perceived as being committed to that belief. A lot of times, it’s just a way of making their argument seem more credible; if someone really is that confident in their beliefs, then those beliefs must be true, right?

Obviously, we know that that’s not the case. There are millions of Christians who will swear with 100% certainty that Jesus has visited them personally and revealed to them that Christianity is the one and only true religion; and there are millions of Muslims and Hindus and other believers who will swear the same thing with 100% certainty about their own deities; but they can’t all be right. Similarly, there are millions of liberals who claim to be 100% certain that their preferred policies are superior, while millions of conservatives claim to be 100% certain that their preferred policies are superior. Again, just because someone claims a high degree of certainty doesn’t prove that their ideas are more credible; all it proves is that people are capable of convincing themselves that they’re certain of things they don’t actually have any way of being certain of. (As Michel de Montaigne said, “Nothing is so firmly believed as that which we least know.”) Nobody wants to admit this out loud, though – especially when they’re talking to someone outside of their religion or political party – because they feel that admitting to anything less than absolute certainty in their beliefs would undermine the perceived credibility of those beliefs. If you’re trying to win an argument, the reasoning goes, then hedging your positions and conceding that there are a lot of unknown grey areas defeats the purpose. You need to know for a fact that your side is right; anything less is self-sabotage.

III.

According to this mentality, admitting when someone on the opposing side makes a decent point (i.e. “making a concession”) is like giving up points in a game – it’s like losing – and therefore you can never do it willingly. It doesn’t matter what’s actually true; what matters is that you deny your opponents the satisfaction of having scored points against you. This isn’t always something that you even need to be consciously aware of – i.e. realizing when your opponent is making a good point and just cynically refusing to admit it for strategic purposes – a lot of times it can take a more subconscious form, where you don’t allow yourself to even notice when they’re making a good point. Like in that Orwell quotation describing Crimestop, it’s not so much that you consciously think “Hmm, yes, I notice that my opponent’s argument is superior to my own, therefore I won’t acknowledge it” – it’s more of a subtle feeling of frustration that your opponent’s argument isn’t instantly crumbling under the force of your effortless refutations like it should, accompanied by a nagging sense of annoyance at not knowing exactly how to make it so. Again, the more you perceive the issue at hand to be central to your side’s ideology, the stronger this behavior is.

And this tendency to never want to cede any ground to your enemy doesn’t just skew the way you handle opposing ideas in an argument; it can also affect your perception of every single person and event that might have some connection to your (or your opponent’s) ideology. If you see a news story about a prominent figure using some horrible derogatory slur, for instance, or a crazed criminal attacking people in the streets, your response will probably be roughly the same as everyone else’s if the perpetrator is ideologically neutral. However, the moment the story takes on an ideological shade – if new information emerges that the wrongdoer subscribes to a particular religious or political belief system, for example – then you’ll immediately be tempted to start rationalizing in one direction or another. If they’re part of the same team you are, then you’ll start thinking up excuses for why what they said wasn’t actually that bad, or why their actions weren’t motivated by their beliefs but by some other factor like mental illness. On the other hand, if they’re part of the opposing team, then you’ll reject any such excuses out of hand and be more inclined to exaggerate the severity of the wrongdoing, claiming that it’s the most reprehensible atrocity you’ve ever seen and reflects the moral bankruptcy of the ideology behind it.

Alexander provides some insight on how ideology can skew our perception of such events:

I have found a pattern: when people consider an idea in isolation, they tend to make good decisions. When they consider an idea a symbol of a vast overarching narrative, they tend to make very bad decisions.

Let me offer [an] example.

A white man is accused of a violent attack on a black woman. In isolation, well, either he did it or he didn’t, and without any more facts there’s no use discussing it.

But what if this accusation is viewed as a symbol? What if you have been saying for years that racism and sexism are endemic in this country, and that whites and males are constantly abusing blacks and females, and they’re always getting away with it because the police are part of a good ole’ boys network who protect their fellow privileged whites?

Well, right now, you’ll probably still ask for the evidence. But if I gave you some evidence, and it was complicated, you’d probably interpret it in favor of the white man’s guilt. The heart has its reasons that reason knows not of, and most of them suck. We unconsciously make decisions based on our own self-interest and what makes us angry or happy, and then later we find reasons why the evidence supports them. If I have a strong interest in a narrative of racism, then I will interpret the evidence to support accusations of racism.

Lest I sound like I’m picking on the politically correct, I’ve seen scores of people with the opposite narrative. You know, political correctness has grown rampant in our society, women and minorities have been elevated to a status where they can do no wrong, the liberal intelligentsia always tries to pin everything on the white male. When the person with this narrative hears the evidence in this case, they may be more likely to believe the white man – especially if they’d just listened to their aforementioned counterpart give their speech about how this proves the racist and sexist tendencies of white men.

Yes, I’m thinking of the Duke lacrosse case.

The problem is that there are two different questions here: whether this particular white male attacked this particular black woman, and whether our society is racist or “reverse racist”. The first question definitely has one correct answer which, while difficult to ascertain, is philosophically simple, whereas the second question is meaningless, in the same technical sense that “Islam is a religion of peace” is meaningless. People are conflating these two questions, and acting as if the answer to the second determines the answer to the first.

Which is all nice and well unless you’re one of the people involved in the case, in which case you really don’t care about which races are or are not privileged in our society as much as you care about not being thrown in jail for a crime you didn’t commit, or about having your attacker brought to justice.

I think this is the driving force behind a lot of politics. Let’s say we are considering a law mandating businesses to lower their pollution levels. So far as I understand economics, the best decision-making strategy is to estimate how much pollution is costing the population, how much cutting pollution would cost business, and if there’s a net profit, pass the law. Of course it’s more complicated, but this seems like a reasonable start.

What actually happens? One side hears the word “pollution” and starts thinking of hundreds of times when beautiful pristine forests were cut down in the name of corporate greed. This links into other narratives about corporate greed, like how corporations are oppressing their workers in sweatshops in third world countries, and since corporate executives are usually white and third world workers usually not, let’s add racism into the mix. So this turns into one particular battle in the war between All That Is Right And Good and Corporate Greed That Destroys Rainforests And Oppresses Workers And Is Probably Racist.

The other side hears the words “law mandating businesses” and starts thinking of a long history of governments choking off profitable industry to satisfy the needs of the moment and their re-election campaign. The demonization of private industry and subsequent attempt to turn to the government for relief is a hallmark of communism, which despite the liberal intelligentsia’s love of it killed sixty million people. Now this is a battle in the war between All That Is Right And Good and an unholy combination of Naive Populism and Soviet Russia.

[…]

Now, if the economists do their calculations and report that actually the law would cause more harm than good, do you think the warriors against Corporate Greed That Destroys Rainforests And Oppresses Workers And Is Probably Racist are going to say “Oh, okay then” and stand down? In the face of Corporate Greed That Destroys Rainforests And Oppresses Workers And Is Probably Racist?!?

[…]

I call this mistake “missing the trees for the forest”. If you have a specific case you need to judge, judge it separately on its own merits, not the merits of what agendas it promotes or how it fits with emotionally charged narratives.

He gives one more example:

You can see that after the Ferguson shooting [of 2014], the average American became a little less likely to believe that blacks were treated equally in the criminal justice system. This makes sense, since the Ferguson shooting was a much-publicized example of the criminal justice system treating a black person unfairly.

But when you break the results down by race, a different picture emerges. White people were actually a little more likely to believe the justice system was fair after the shooting. Why? I mean, if there was no change, you could chalk it up to white people believing the police’s story that the officer involved felt threatened and made a split-second bad decision that had nothing to do with race. That could explain no change just fine. But being more convinced that justice is color-blind? What could explain that?

My guess – before Ferguson, at least a few people interpreted this as an honest question about race and justice. After Ferguson, everyone mutually agreed it was about politics.

[…]

Anyone who thought that the question in that poll was just a simple honest question about criminal justice was very quickly disabused of that notion. It was a giant Referendum On Everything, a “do you think the Blue Tribe is right on every issue and the Red Tribe is terrible and stupid, or vice versa?” And it turns out many people who when asked about criminal justice will just give the obvious answer, have much stronger and less predictable feelings about Giant Referenda On Everything.

In my last post, I wrote about how people feel when their in-group is threatened, even when it’s threatened with an apparently innocuous point they totally agree with:

I imagine [it] might feel like some liberal US Muslim leader, when he goes on the O’Reilly Show, and O’Reilly ambushes him and demands to know why he and other American Muslims haven’t condemned beheadings by ISIS more, demands that he criticize them right there on live TV. And you can see the wheels in the Muslim leader’s head turning, thinking something like “Okay, obviously beheadings are terrible and I hate them as much as anyone. But you don’t care even the slightest bit about the victims of beheadings. You’re just looking for a way to score points against me so you can embarrass all Muslims. And I would rather personally behead every single person in the world than give a smug bigot like you a single microgram more stupid self-satisfaction than you’ve already got.”

I think most people, when they think about it, probably believe that the US criminal justice system is biased. But when you feel under attack by people whom you suspect have dishonest intentions of twisting your words so they can use them to dehumanize your in-group, eventually you think “I would rather personally launch unjust prosecutions against every single minority in the world than give a smug out-group member like you a single microgram more stupid self-satisfaction than you’ve already got.”

When people regard every news story involving their side as a referendum on the truth value of their side’s ideology as a whole, it’s easy to see why they might have a hard time condemning wrongdoers on their own side, even when such criticism is deserved – or why they might struggle to give the other side credit when credit is due. This is what ideological tribalism is all about.

(Another quick disclaimer, by the way: You may be reading this at some point in the future where social issues like race aren’t as all-consuming – the big issue of your day might be a war or a financial crisis or something – but at the time of this writing, at least, they’re the main issues dominating the political discourse, so for better or worse, a lot of the quotations cited here are going to involve things like racial justice and neo-Nazi rallies and so forth. Having said that, for most of these examples you could substitute other contentious issues like capitalism vs. socialism or pro-choice vs. pro-life and they’d apply just as well.)

The irony here is that from a neutral outside standpoint, a person’s inability to notice their own double standards and openly admit when someone on their side acts wrongly reflects even worse on their side than if they simply shrugged and said, “Oh yeah, obviously that person doesn’t represent what I believe in and their actions are clearly counterproductive to the cause.” Someone who can’t bring themselves to acknowledge clear-cut cases of wrongdoing only makes themselves look guilty by association, unable to assess things reasonably or impartially.

But within the tribalist mindset, “assessing things reasonably and impartially” isn’t always the goal. It’s not a matter of trying to dispassionately weigh different ideas to see which one is most accurate, like a judge at a boxing match – it’s more like being one of the boxers. You’re not a judge in the battle of ideas, you’re fighting in it – and when you’re in a fight, the point isn’t to make the right judgment; the point is to win. As Eliezer Yudkowsky writes:

Politics is an extension of war by other means. Arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back – providing aid and comfort to the enemy. People who would be level-headed about evenhandedly weighing all sides of an issue in their professional life as scientists, can suddenly turn into slogan-chanting zombies when there’s a Blue or Green position on an issue.

And David Wong agrees:

We don’t ingest facts to educate ourselves; we do it so that we have ammunition to use against the opposing tribe. Therefore, it doesn’t matter if we’re oversimplifying or skewing (we’ll point out that guns kill 30,000 Americans every year but omit that two-thirds of those are suicides) because we know our side is right and winning is all that matters.

There are much better ways of thinking, of course. According to Brennan’s theory of political behavior, there are three primary approaches a person might adopt when it comes to their ideological views – that of the “hobbit,” the “hooligan,” or the “vulcan” – and of these three, the third is decidedly better than the first two. It’s basically the equivalent of putting yourself in the role of judge rather than fighter. Unfortunately, though, the majority of people never make it to the more advanced “vulcan” category; they tend to get bogged down at the “hobbit” or “hooligan” stage:

  • Hobbits are mostly apathetic and ignorant about politics. They lack strong, fixed opinions about most political issues. Often they have no opinions at all. They have little, if any, social scientific knowledge; they are ignorant not just of current events but also of the social scientific theories and data needed to evaluate as well as understand these events. Hobbits have only a cursory knowledge of relevant world or national history. They prefer to go on with their daily lives without giving politics much thought. In the United States, the typical nonvoter is a hobbit.
  • Hooligans are the rabid sports fans of politics. They have strong and largely fixed worldviews. They can present arguments for their beliefs, but they cannot explain alternative points of view in a way that people with other views would find satisfactory. Hooligans consume political information, although in a biased way. They tend to seek out information that confirms their preexisting political opinions, but ignore, evade, and reject out of hand evidence that contradicts or disconfirms their preexisting opinions. They may have some trust in the social sciences, but cherry-pick data and tend only to learn about research that supports their own views. They are overconfident in themselves and what they know. Their political opinions form part of their identity, and they are proud to be a member of their political team. For them, belonging to the Democrats or Republicans, Labor or Tories, or Social Democrats or Christian Democrats matters to their self-image in the same way being a Christian or Muslim matters to religious people’s self-image. They tend to despise people who disagree with them, holding that people with alternative worldviews are stupid, evil, selfish, or at best, deeply misguided. Most regular voters, active political participants, activists, registered party members, and politicians are hooligans.
  • Vulcans think scientifically and rationally about politics. Their opinions are strongly grounded in social science and philosophy. They are self-aware, and only as confident as the evidence allows. Vulcans can explain contrary points of view in a way that people holding those views would find satisfactory. They are interested in politics, but at the same time, dispassionate, in part because they actively try to avoid being biased and irrational. They do not think everyone who disagrees with them is stupid, evil, or selfish.

These are ideal types or conceptual archetypes. Some people fit these descriptions better than others. No one manages to be a true vulcan; everyone is at least a little biased. Alas, many people fit the hobbit and hooligan molds quite well. Most Americans are either hobbits or hooligans, or fall somewhere in the spectrum in between.

Brennan’s description of hooligans as being like “rabid sports fans” is particularly appropriate here. One of the first studies to uncover the extent of this psychological bias actually involved gauging fans’ reactions to a football game, as Tim Harford describes:

An early indicator of how tribal our logic can be was a study conducted in 1954 by Albert Hastorf, a psychologist at Dartmouth, and Hadley Cantril, his counterpart at Princeton. Hastorf and Cantril screened footage of a game of American football between the two college teams. It had been a rough game. One quarterback had suffered a broken leg. Hastorf and Cantril asked their students to tot up the fouls and assess their severity. The Dartmouth students tended to overlook Dartmouth fouls but were quick to pick up on the sins of the Princeton players. The Princeton students had the opposite inclination. They concluded that, despite being shown the same footage, the Dartmouth and Princeton students didn’t really see the same events. Each student had his own perception, closely shaped by his tribal loyalties. The title of the research paper was “They Saw a Game”.

A more recent study revisited the same idea in the context of political tribes. The researchers showed students footage of a demonstration and spun a yarn about what it was about. Some students were told it was a protest by gay-rights protesters outside an army recruitment office against the military’s (then) policy of “don’t ask, don’t tell”. Others were told that it was an anti-abortion protest in front of an abortion clinic.

Despite looking at exactly the same footage, the experimental subjects had sharply different views of what was happening – views that were shaped by their political loyalties. Liberal students were relaxed about the behaviour of people they thought were gay-rights protesters but worried about what the pro-life protesters were doing; conservative students took the opposite view. As with “They Saw a Game”, this disagreement was not about the general principles but about specifics: did the protesters scream at bystanders? Did they block access to the building? We see what we want to see – and we reject the facts that threaten our sense of who we are.

Brennan drives home the parallel between sports fandom and political partisanship:

[Ilya] Somin has a good analogy: some people are political fans. Sports fans enjoy rooting for a team. […] Sports fans, however, also tend to evaluate information in a biased way. They tend to “play up evidence that makes their team look good and their rivals look bad, while downplaying evidence that cuts the other way.”

This is what tends to happen in politics. People tend to see themselves as being on team Democrat or team Republican, team Labor or team Conservative, and so on. They acquire information because it helps them root for their team and against their hated rivals. If the rivalry between Democratic and Republican voters sometimes seems like the rivalry between Red Sox and Yankees fans, that’s because from a psychological point of view, it very much is.

Holding unconscious double standards like this may not be such a big deal when the context is a meaningless sports game. Nobody will hold it against you if you’re a “homer” who supports your team even if it means indulging in a little self-delusion. But this kind of team spirit can have devastating consequences when it seeps into other areas of contention, such as international relations, where the stakes can be matters of life and death. To quote Orwell again:

All nationalists have the power of not seeing resemblances between similar sets of facts. A British Tory will defend self-determination in Europe and oppose it in India with no feeling of inconsistency. Actions are held to be good or bad, not on their own merits, but according to who does them, and there is almost no kind of outrage – torture, the use of hostages, forced labour, mass deportations, imprisonment without trial, forgery, assassination, the bombing of civilians – which does not change its moral colour when it is committed by ‘our’ side.

Troublingly, this pattern permeates practically every contentious issue in our society, regardless of whether the stakes are low or high. Simler and Hanson give a more recent example:

During the Bush administration, U.S. antiwar protestors – most of whom were liberal – justified their efforts in terms of the harms of war. And yet when Obama took over as president, they drastically reduced their protests, even though the wars in Iraq and Afghanistan continued unabated. All this suggested an agenda that was more partisan than pacifist.

McRaney mentions a similar case relating to economic attitudes:

Remember all that handwringing about economic insecurity in red states as political motivation to vote one way or the other? Recent analysis by behavioral economist Peter Atwater has found that almost all of that economic insecurity has evaporated since Trump became president, despite the fact that nothing has changed economically in those places where it was once a supposedly major concern. This suggests that people’s political behavior was driven by tribal psychology, like it usually is, but justified by whatever seems salient at the time, like it usually is. Once their “tribal mood,” as he put it, improved, so did their feelings about the economy.

And Zaid Jilani adds:

“Americans are dividing themselves socially on the basis of whether they call themselves liberal or conservative, independent of their actual policy differences,” [according to Lilliana Mason].

[…]

She noted, for instance, that Americans who identify most strongly as conservative, whether they hold more left-leaning or right-leaning positions on major issues, dislike liberals more than people who more weakly identify as conservatives but may hold very right-leaning issue positions.

The loose connection some voters have with policy preferences has become apparent in recent years. Donald Trump managed to flip a party from support of free trade to opposition to it by merely taking the opposite side of the issue. Democrats, meanwhile, mocked Mitt Romney in 2012 for calling Russia the greatest geopolitical adversary of the United States, but now have flipped and see Russia as exactly that. Regarding health care, the structure of the Affordable Care Act was initially devised by the conservative Heritage Foundation and implemented in Massachusetts as “Romneycare.” Once it became Obamacare, the Republican team leaders deemed it bad, and thus it became bad.

Mason believes the implications of such shallow divisions between people could make the work of democracy harder. If your goal in politics is not based around policy but just defeating your perceived enemies, what exactly are you working toward? (Is it any surprise there is an entire genre of campus activism dedicated to simply upsetting your perceived political opponents?)

“The fact that even this thing that’s supposed to be about reason and thoughtfulness and what we want the government to do, the fact that even that is largely identity-powered, that’s a problem for debate and compromise and the basic functioning of democratic government. Because even if our policy attitudes are not actually about what we want the government to do but instead about who wins, then nobody cares what actually happens in the government,” Mason said. “We just care about who’s winning in a given day. And that’s a really dangerous thing for trying to run a democratic government.”

When you’re locked into this kind of mindset, actual ideas and their consequences are really only a secondary consideration. The main thing is just to beat the other side – so regardless of what they actually say or do, your natural impulse will be to come up with reasons to condemn it; and regardless of what your side says or does, your impulse will be to come up with reasons to support it. It’s not a matter of developing broad overarching principles that can be applied across all different situations and contexts – it’s not even a matter of being consistent in your beliefs at all – it’s just a matter of taking whatever stance is most likely to help your side at the moment. Alexander writes:

An idea I keep stressing here […] is that people rarely have consistent meta-level principles. Instead, they’ll endorse the meta-level principle that supports their object-level beliefs at any given moment. The example I keep giving is how when the federal government was anti-gay, conservatives talked about the pressing need for federal intervention and liberals insisted on states’ rights; when the federal government became pro-gay, liberals talked about the pressing need for federal intervention and conservatives insisted on states’ rights.

He quotes another particularly revealing example from Tetlock and Dan Gardner:

When hospitals created cardiac care units to treat patients recovering from heart attacks, [Archie] Cochrane proposed a randomized trial to determine whether the new units delivered better results than the old treatment, which was to send the patient home for monitoring and bed rest. Physicians balked. It was obvious the cardiac care units were superior, they said, and denying patients the best care would be unethical. But Cochrane was not a man to back down…he got his trial: some patients, randomly selected, were sent to the cardiac care units while others were sent home for monitoring and bed rest. Partway through the trial, Cochrane met with a group of the cardiologists who had tried to stop his experiment. He told them that he had preliminary results. The difference in outcomes between the two treatments was not statistically significant, he emphasized, but it appeared that patients might do slightly better in the cardiac care units. “They were vociferous in their abuse: ‘Archie,’ they said, ‘we always thought you were unethical. You must stop the trial at once.’” But then Cochrane revealed he had played a little trick. He had reversed the results: home care had done slightly better than the cardiac units. “There was dead silence and I felt rather sick because they were, after all, my medical colleagues.”

This story is the key to everything. See also my political spectrum quiz and the graph that inspired it. Almost nobody has consistent meta-level principles. Almost nobody really has opinions like “this study’s methodology is good enough to believe” or “if one group has a survival advantage of size X, that necessitates stopping the study as unethical”. The cardiologists sculpted their meta-level principles around what best supported their object-level opinions – that more cardiology is better – and so generated the meta-level principles “Cochrane’s experiment is accurate” and “if one group has a slight survival advantage, that’s all we need to know before ordering the experiment stopped as unethical.” If Cochrane had (truthfully) told them that the cardiology group was doing worse, they would have generated the meta-level principles “Cochrane’s experiment is flawed” and “if one group has a slight survival advantage that means nothing and it’s just a coincidence”. In some sense this is correct from a Bayesian point of view – I interpret sonar scans of Loch Ness that find no monsters to be probably accurate, but if a sonar scan did find a monster I’d wonder if it was a hoax – but in less obvious situations it can be a disaster. Cochrane understood this and so fed them the wrong data and let them sell him the rope he needed to hang them.

When it comes to things like politics and religion, a lot of this inconsistency may simply come down to the fact that (as mentioned earlier) people don’t always give much thought to what their stance on a particular issue actually is until they’re asked about it directly. As Alexander puts it:

We don’t have a good explicit understanding of what high-level principles we use, and tend to make them up on the spot to fit object-level cases.

If you consider relatively esoteric political issues like gerrymandering or filibustering, for instance, these probably aren’t the kind of issues that most people have spent enough time thinking about to have formulated a coherent stance on. If you ask them how they feel about these issues, then, their responses will probably depend on the context. If it’s a situation where the opposing side is the one using these tactics to give themselves a political advantage, the person will probably lament that these practices are dishonorable perversions of the democratic process, tantamount to outright cheating – but if it’s their own side that’s using them, they’ll probably shrug their shoulders and say that these tactics are a perfectly legitimate part of “how the game is played,” and that exploiting them is just smart politics.

You may have even found yourself in such a situation before, where you haven’t necessarily spent a lot of time thinking about the question at hand in advance, but you don’t want to embarrass yourself by not being able to give a confident opinion on the issue, so you just make a snap judgment and default to your usual assumption that whatever your side is doing must be right and whatever the other side is doing must be wrong. This kind of ad hoc approach might serve you well in the short term – the fact that you were able to provide an immediate answer can do a fairly good job simulating the certitude that comes with having researched the issue on your own and come to your own conclusions. But if you haven’t actually thought through all the implications in advance and made sure that the stance you’re taking is consistent with the rest of your worldview, it can come back to bite you later on when the shoe is on the other foot. Once you’ve staked yourself to the idea that gerrymandering is a valid political strategy, or that states’ rights should take priority over federal mandates, or that deliberately targeting civilians in warfare can be a justifiable tactic, you may find yourself wishing you could take it back once your enemies are the ones gerrymandering your state or bombing your neighborhood.

No doubt, it’s important to be able to stand up for your beliefs. Fighting for your beliefs can even be downright heroic. But that’s only true if what you believe in is actually the right thing to be fighting for. If you’re fighting for the wrong thing – even if your intentions are pure – then there’s a very good chance that you’ll end up painting yourself into a corner and your arguments will lead you somewhere you don’t want to go. That’s why it’s critically important to be willing to actually sit down and figure out exactly what the right positions are on every issue you decide to get involved with – not just which positions will score the most points for your side, but what positions are really true. Just having conviction in your beliefs isn’t enough. Just fighting hard for them isn’t enough. As Alexander writes:

Wrong people can be just as loud as right people, sometimes louder. If two doctors are debating the right diagnosis in a difficult case, and the patient’s crazy aunt hires someone to shout “IT’S LUPUS!” really loud in front of their office all day, that’s not exactly helping matters. If a group of pro-lupus protesters block the entry to the hospital and refuse to let any of the staff in until the doctors agree to diagnose lupus, that’s a disaster. All that passion does is use pressure or even threats to introduce bias into the important work of debate and analysis.

Thoughtful analysis alone isn’t enough either, of course. Once you’ve figured things out to the best of your ability, the next step is to undertake the hard work of putting those ideas into action. But the crucial point here is that you can’t put the cart before the horse. Step Two has to come after Step One. Unfortunately, too many people nowadays are like soldiers who yearn to fight for their country, but don’t have enough interest in foreign policy to first figure out whether the war they’re going to be fighting in is ethically justifiable. They’re just so captivated by the idea of being a hero for their side that they don’t spend enough time thinking about whether the hill they’ve chosen to die on is the right one. To borrow an analogy from Julia Galef, they get so fixated on trying to play this role of soldier – fighting as aggressively as possible to win a particular piece of territory – that they end up doing more harm than good (both to themselves and others) in situations where their mindset shouldn’t have been that of soldier but of scout – not trying to actively attack or defend, but simply trying to survey the territory and form the most accurate map of it possible. And the result of this mistake is that, all too often, they end up with a worldview that’s badly distorted. They end up with a map that doesn’t match the territory they’re trying to navigate, so to speak – and consequently, they end up getting themselves lost.

Even so, a lot of them choose to just keep plowing ahead, because once again, it all comes back to the point that total accuracy and impartiality isn’t necessarily the goal. People can have other objectives, and when it comes to ideological disputes, the priority that usually takes precedence is sticking up for their chosen side. As Julie Beck writes:

Though false beliefs are held by individuals, they are in many ways a social phenomenon. [The followers of Marian Keech] held onto their belief that the spacemen were coming […] because those beliefs were tethered to a group they belonged to, a group that was deeply important to their lives and their sense of self.

[Daniel] Shaw describes the motivated reasoning that happens in these groups: “You’re in a position of defending your choices no matter what information is presented,” he says, “because if you don’t, it means that you lose your membership in this group that’s become so important to you.” Though cults are an intense example, Shaw says people act the same way with regard to their families or other groups that are important to them.

[…]

In one particularly potent example of party trumping fact, when shown photos of Trump’s inauguration and Barack Obama’s side by side, in which Obama clearly had a bigger crowd, some Trump supporters identified the bigger crowd as Trump’s. When researchers explicitly told subjects which photo was Trump’s and which was Obama’s, a smaller portion of Trump supporters falsely said Trump’s photo had more people in it.

While this may appear to be a remarkable feat of self-deception, Dan Kahan thinks it’s likely something else. It’s not that they really believed there were more people at Trump’s inauguration, but saying so was a way of showing support for Trump. “People knew what was being done here,” says Kahan, a professor of law and psychology at Yale University. “They knew that someone was just trying to show up Trump or trying to denigrate their identity.” The question behind the question was, “Whose team are you on?”

In these charged situations, people often don’t engage with information as information but as a marker of identity. Information becomes tribal.

In a New York Times article called “The Real Story About Fake News Is Partisanship,” Amanda Taub writes that sharing fake news stories on social media that denigrate the candidate you oppose “is a way to show public support for one’s partisan team – roughly the equivalent of painting your face with team colors on game day.”

This sort of information tribalism isn’t a consequence of people lacking intelligence or of an inability to comprehend evidence. Kahan has previously written that whether people “believe” in evolution or not has nothing to do with whether they understand the theory of it – saying you don’t believe in evolution is just another way of saying you’re religious. [When test subjects were incentivized to temporarily suspend their group loyalties – either through monetary compensation or just by being asked to get the correct answers as best they could – their test scores correlated with their level of education. Otherwise, their scores correlated with their level of religiosity.] Similarly, a recent Pew study found that a high level of science knowledge didn’t make Republicans any more likely to say they believed in climate change, though it did for Democrats.

Kahan himself elegantly sums it up:

When [these people are] being asked about those things, they are not telling you what they know. They are telling you who they are.

It’s actually kind of remarkable how much our beliefs can be shaped by our group allegiances, rather than the other way around. The desire to see things in a way that harmonizes with your group’s consensus viewpoint can not only be powerful enough to override your better judgment – it can even override the most basic evidence of your own five senses. Brennan recounts one of psychology’s most famous group conformity experiments:

In Solomon Asch’s experiment, eight to ten students were shown sets of lines in which two lines were obviously the same length, and the others were obviously of different length. They were then asked to identify which lines matched. In the experiment, only one member of the group is an actual subject; the rest are collaborators. As the experiment proceeds, the collaborators begin unanimously to select the wrong line.

Asch wanted to know how the experimental subjects would react. If nine other students are all saying that lines A and B, which are obviously different, are the same length, would subjects stick to their guns or instead agree with the group? Asch found that about 25 percent of the subjects stuck to their own judgment and never conformed, about 37 percent of them caved in, coming to agree completely with the group, and the rest would sometimes conform and sometimes not. Control groups responding privately in writing were only one-fifth as likely to be in error. These results have been well replicated.

For a long time, researchers wondered whether the conformists were lying or not. Were they just pretending to agree with the group, or did they actually believe that the nonidentical lines were identical because the group said so? Researchers recently repeated a version of the experiment using functional magnetic resonance imaging. By monitoring the brain, they might be able to tell whether subjects were making an “executive decision” to conform to the group, or whether their perceptions actually changed. The results suggest that many subjects actually come to see the world differently in order to conform to the group. Peer pressure might distort their eyesight, not just their will.

These findings are frightening. People can be made to deny simple evidence right in front of their faces (or perhaps even come to actually see the world differently) just because of peer pressure. The effect should be even stronger when it comes to forming political beliefs.

And sure enough, that does seem to be the case. As Aurora Dagny writes of her experience belonging to radical political groups, members are often reluctant to say (or even think) anything that deviates even slightly from the consensus group ideology, for fear of being seen as disloyal:

Every minor heresy inches you further away from the group. […] Conversely, showing your devotion to the cause earns you respect. Groupthink becomes the modus operandi. When I was part of groups like this, everyone was on exactly the same page about a suspiciously large range of issues. Internal disagreement was rare.

When you’re involved in an ideological group like this, you may be aware at some level that you’re fudging your own internal beliefs a little bit in order to support your group’s consensus narrative – or you may not be consciously aware of it at all. But either way, you’re being motivated by the same underlying incentives; you’d rather be the steadfast loyalist who always stands up for the “good side” than the pedantic hairsplitter who always insists on being technically correct even when it undermines the movement’s credibility and annoys everyone in the group. You’d rather be too committed to a cause you know is right than not committed enough – even if it means suppressing what you actually think is true in the back of your mind on those rare occasions when it contradicts the group.

IV.

It seems to me that this desire to publicly signal your loyalty to the right side – to show that you’re “one of the good guys” – underlies a huge proportion of the social, political, religious, and moral action going on in the world today. No doubt, a lot of it is also driven by a genuine desire to do good, to serve truth, and to promote the causes of virtue and justice for their own sake. When we decide to pledge our loyalty to one ideological side over another, these kinds of noble motives really can be key factors in determining which side we end up taking. But I think it’d be naïve to believe that they’re the only factors motivating our behavior; things like social status and the desire for prestige are almost always playing a role at some level as well. It’s not always enough just to be on the good side fighting the good fight; at a subconscious level, we also want others to see that we’re doing so. Here’s Alexander again:

It seems likely that everyone in politics is being a bit self-deceptive – this won’t come as a surprise to anyone who reads Robin Hanson. Most people discuss political ideas not in order to help other people, but in order to signal how concerned and intelligent they are, or as part of group bonding rituals. Otherwise they wouldn’t be posting “I HATE ABORTION SO MUCH” on Facebook to see how many “likes” they can get, they’d be out canvassing door-to-door or (even better) just working overtime at their job to donate money to anti-abortion charities without ever mentioning it to anyone. Certainly the average person who puts an “Abortion Stops A Beating Heart” bumper sticker on their car isn’t doing it because they have a theory by which their action later results in babies being saved (or women being oppressed, or anything at all happening in the external world). I can even notice this sort of process happening in my own head in real time when I think about efficient charity.

Yet all of these actions manifest on a conscious level as being genuinely concerned about the issue I’m talking about, or putting up a bumper sticker about, or donating money to.

So we have a model of the brain that includes at least two levels: a surface honest level, where you really care about fetuses, and a deep signaling level, where you just want to impress the other people in your church and signal to yourself that you are a compassionate caring person.

This seems to have become especially true in the age of social media. A few generations ago, you might have had a few scattered opportunities to signal your ideological loyalties throughout a typical day – maybe (say) chatting with your spouse over the morning newspaper, or sharing opinions with co-workers over after-work drinks – but it wasn’t nearly the kind of mass social experience it is today, where every news story is accompanied by a flood of millions of people weighing in with their opinions simultaneously, and where posting one punchy quip can earn you thousands of likes from friends and strangers alike. In this modern environment, being engaged with the big issues is no longer just about privately holding the right opinions or voting for the right people; as Geoffrey Miller puts it, the name of the game nowadays is “ideological self-advertisement.” If you go out of your way to be conspicuously vocal with your religious or political beliefs online, it can serve as a badge of commitment to your side, and as a way of making sure your declarations of principle are always at the center of everyone’s attention, in a way that wouldn’t be possible if you were just discussing your ideas the old-fashioned way. Having a public platform can amplify your feelings of self-importance – and accordingly, it can make you want to weigh in on every topic, even the ones you don’t particularly know much about, just so you can be seen to have weighed in on them.

The sense of satisfaction that comes with signaling your commitment to the good side – and the positive feedback you get from others as a result – can feel so rewarding that it can become an end in itself. You may have everyone (including yourself) convinced that the ideas and issues themselves are all that you really care about – but at some level, your outspokenness may be coming more from a place of just wanting to participate in the conversation and “show your stuff” intellectually. According to Simler and Hanson, that’s how a lot of ideological activism is these days – it’s not really so much about producing actual changes in the system, as it claims to be; it’s more like an act of consumption by its participants, a way of indulging themselves by giving the world a piece of their mind while simultaneously being able to congratulate themselves for their civic-mindedness. As Wong writes:

Last year’s controversies are boring. Do we still have troops in Afghanistan? Is Flint’s water safe to drink? Did the DACA thing get resolved? What happened with all of those refugees that used to be in the news every day?

It doesn’t matter. We’ve moved on to the next [topic], because many of us aren’t doing this to save the world – we’re doing it to keep ourselves entertained. Up-vote the stuff we agree with, snark at the stuff we don’t, watch the Likes accumulate, and convince ourselves we won. It’s all a game, something to kill time.

And Ezra Klein echoes this sentiment:

There is a danger […] that politics becomes an aesthetic rather than a program. It’s a danger on the right, where Donald Trump modeled a presidency that cared more about retweets than bills. But it’s also a danger on the left, where the symbols of progressivism are often preferred to the sacrifices and risks those ideals demand.

This isn’t necessarily limited to people’s behaviors on social media, either. Alexander notices how this kind of mentality can become prevalent not just in the online world, but in real-life contexts as well, including a lot of the mass political demonstrations that have made the news recently:

One thing that did strike me was this tweet about [how things like political demands and threats to power have been supplanted by a] focus on funny signs and who had the best costume. It seems to me that if we were protesting something genuinely awful (like a genocide abroad), we wouldn’t wear silly costumes and funny signs. Does that mean that a decision to go ahead with the signs and costumes reflects some kind of subconscious feeling that this isn’t really that bad, or a motivation springing from something other than true outrage?

John McWhorter makes the critique even more sharply, alluding to the pragmatism of the early Civil Rights movement for contrast:

[Regarding certain modern forms of mass protest,] what began as concrete activism aimed at getting justice devolved into abstract gestures unconcerned with justice. […] Many today genuinely think that the gestures are activism.

[…]

In the 1960s, civil rights leader Bayard Rustin was dismayed by a new breed of separatist black leaders. They shunned concrete, proactive lobbying and careful rhetorical suasion, instead preferring high-profile altercations, preferably involving getting arrested. In 1963, Rustin counseled the increasingly radicalized Student Nonviolent Coordinating Committee that “the ability to go to jail should not be substituted for an overall social reform program.” In Rustin’s eyes, these scenes were ultimately, as he put it, “gimmicks.” The typical demonstration often had “no relation to the fundamental question of how to get rid of discrimination” and was just “an end in itself.” In other words, Rustin was watching activism devolve into mere gesture.

[…]

Philosophical writer Lee Harris recalls a friend who joined a disruptive protest against the Vietnam War involving lying down in front of cars crossing a bridge. The friend was openly unconcerned with whether it would help lead to America’s withdrawing from the conflict, participating instead because it would be “good for his soul”:

He had no interest in changing the minds of these commuters, no concern over whether they became angry at the protesters or not. They were merely there as props, as so many supernumeraries in his private political psychodrama. The protest for him was not politics, but theater; the significance of his role lay not in the political ends his actions might achieve but rather in their symbolic value as ritual.

And Nathan Heller drives the point home:

In “Inventing the Future: Postcapitalism and a World Without Work” (Verso), a book published in 2015, then updated and reissued this past year for reasons likely to be clear to anyone who has opened a newspaper, Nick Srnicek and Alex Williams question the power of marches, protests, and other acts of what they call “folk politics.” These methods, they say, are more habit than solution. Protest is too fleeting. It ignores the structural nature of problems in a modern world. “The folk-political injunction is to reduce complexity down to a human scale,” they write. This impulse promotes authenticity-mongering, reasoning through individual stories (also a journalistic tic), and a general inability to think systemically about change. In the immediate sense, a movement such as Occupy wilted because police in riot gear chased protesters out of their spaces. But, really, the authors insist, its methods sank it from the start by channelling the righteous sentiments of those involved over the mechanisms of real progress.

“This is politics transmitted into pastime – politics-as-drug-experience, perhaps – rather than anything capable of transforming society,” Srnicek and Williams write. “If we look at the protests today as an exercise in public awareness, they appear to have had mixed success at best. Their messages are mangled by an unsympathetic media smitten by images of property destruction – assuming that the media even acknowledges a form of contention that has become increasingly repetitive and boring.”

Boring? Ouch.

As Mark Lilla puts it, such activism “is largely expressive, not persuasive.” It isn’t so much concerned with accomplishing its stated socio-political goals as with providing a cathartic outlet for its participants to proclaim their opinions in a public forum, and to get some measure of recognition for doing so. If these actions actually were to accomplish their participants’ stated socio-political goals, of course, then so much the better; but if that were the main objective, it’s unlikely that these public demonstrations would take the form that they do. The most charitable way to describe them is to say that they’re purely symbolic. Another term sometimes used to describe this kind of behavior, though, is “recreational outrage” – which may sound like an oxymoron, but I think accurately encapsulates a real phenomenon.

(An even more derisive way of describing this behavior is to dismiss it as little more than a form of glorified LARPing (live action role-playing). It has become increasingly common for liberals to say that pro-gun activists are just “playing soldier” when they openly carry their weapons and strut around like badasses, and for conservatives to say that social justice activists are just acting out an imaginary role as modern-day civil rights heroes when they endlessly indulge in overdramatic speeches and demonstrations, and so on. A lot of these accusations are unfair (and often only serve to make things worse when leveled as open criticisms). But sometimes there can actually be a kernel of truth to them. As Gwern Branwen writes, there’s a good case to be made that even the most extreme ideologues – like jihadists – are often driven not so much by a genuine feeling of necessity as by the personal thrill of fighting for their cause; they often seem to be playing “fantasy jihad” (as a means of socialization and status-seeking among themselves) more than anything else.)

Now, obviously, none of this is to say that ideological outrage can never be justifiable. If the things you’re protesting against really are outrageous, then what further reason do you need to justify being outraged in response? Nor is it to say that mass protest can never be a productive reaction to that outrage. When mass protest is actually well-targeted, it can be a genuinely effective way of drawing people’s attention to issues and incidents they might otherwise have overlooked; and if it’s part of a broader effort that includes real, concrete follow-up measures, it can (if nothing else) help disincentivize whatever’s being protested against by showing what significant backlash it will create if it continues to happen (in terms of electoral outcomes and so on).

Still though, when these positive factors aren’t entirely present, it can be hard to differentiate between protesting in a way that actually makes a positive impact, and merely telling yourself that you’re making a positive impact as an excuse to go out and protest. And it can be equally hard to disentangle genuine outrage from the kind geared more toward group signaling – because after all, visible outrage really is a highly effective signaling tool. One of the best ways to demonstrate your virtue and your commitment to the side of good is to make it clear that the issues at hand are more than mere intellectual abstractions to you – they affect you on a deep emotional level. To you, these problems aren’t just practical matters to be addressed through logic and pragmatism – you take them personally. The more outrage you’re able to muster toward any given issue, then, the more it conveys the message that you care about that issue. And if the issue is one of significant social, moral, or religious importance, then that can only mean that you must be all the more noble and admirable for being so deeply invested in such a worthy cause.

That feeling, that jolt of self-righteousness that comes with ideological outrage, can be invigorating and even empowering. We all want to be part of something bigger than ourselves, as commenter 8footpenguin relates:

Maybe this is kind of a silly example, but it’s what came to mind. I remember watching movies like Remember the Titans and you get all these really warm feelings when the black and white kids start to bond, and when the white players show deference and respect to a black coach. When the context is this very ugly sort of background environment and you see the contrast of people becoming virtuous and bonding together, it’s a powerful emotion. However, there is also this – somewhat perverse – feeling I’ll admit to having that you almost wish you’d find yourself in such a situation so you could be part of this kind of virtuous, admirable thing.

It’s not unlike the feeling I had watching Band of Brothers like “Damn, I’ll never do anything as heroic and romantic and just staggeringly awesome as these guys parachuting in on D-Day.” It’s a little uncomfortable to realize your actions and life are just kind of mundane by comparison.

I think when we’re college aged we’re especially vulnerable to these kinds of pretensions and this desire to distinguish ourselves and take part in grand admirable things.

I think overall these are good qualities. Surely a huge part of human success is our deep, inborn admiration for altruism, virtue and sacrifice and the corresponding desire to exemplify those traits.

I’ve been guilty of throwing around the phrase virtue-signaling, but I think now that’s not really fair. I don’t think people are cynically wanting to appear virtuous. I think they truly want to be virtuous, which is a good thing. But we should be driven, ultimately, by a practical sense of doing good, effective work regardless of whether or not it requires any impressive acts of virtue.

Like I said before, it is undoubtedly admirable to fight for a righteous cause. But the fact that you know you’re fighting for a righteous cause, and are consciously aware of the nobility of your actions, can complicate things, as David Brin discusses:

I want to zoom down to a particular emotional and psychological pathology. The phenomenon known as self-righteous indignation.

We all know self-righteous people. (And, if we are honest, many of us will admit having wallowed in this state ourselves, either occasionally or in frequent rhythm.) It is a familiar and rather normal human condition, supported – even promulgated – by messages in mass media.

While there are many drawbacks, self-righteousness can also be heady, seductive, and even… well… addictive. Any truly honest person will admit that the state feels good. The pleasure of knowing, with subjective certainty, that you are right and your opponents are deeply, despicably wrong.

Sanctimony, or a sense of righteous outrage, can feel so intense and delicious that many people actively seek to return to it, again and again. Moreover, as Westen et al. have found, this trait crosses all boundaries of ideology. (I discuss this general effect in The Transparent Society.)

Indeed, one could look at our present-day political landscape and argue that a relentless addiction to indignation may be one of the chief drivers of obstinate dogmatism and an inability to negotiate pragmatic solutions to a myriad of modern problems. It may be the ultimate propellant behind the current “culture war.”

Dagny echoes these sentiments, again recounting her experience as part of an ideological movement based largely on outrage:

High on their own supply, activists in these organizing circles end up developing a crusader mentality: an extreme self-righteousness based on the conviction that they are doing the secular equivalent of God’s work. It isn’t about ego or elevating oneself. In fact, the activists I knew and I tended to denigrate ourselves more than anything. It wasn’t about us, it was about the desperately needed work we were doing, it was about the people we were trying to help. The danger of the crusader mentality is that it turns the world into a battle between good and evil. Actions that would otherwise seem extreme and crazy become natural and expected. I didn’t think twice about doing a lot of things I would never do today.

And this is the real problem – once you’ve made your mind up that your level of visible outrage is a symbolic badge representing your level of commitment to your cause, the natural conclusion of this logic is that you therefore need to maximize your level of anger, to show everyone that you are the most outraged, dammit. You need to show that you are willing to stop at nothing to achieve your aims, because they really are that important to you. And if anyone disagrees with you, well then, this just means they are an enemy of the good, and must be vanquished.

After all, that’s the other thing about being part of a group; if you want to feel like you’re part of a special club, it’s not enough simply that you be included – others have to be excluded. You can’t have an in-group without having an out-group; and you can’t consider yourself a hero unless you have a villain to fight against. As Mason explains:

In order to have society, you have to have some reason for membership. There have to be some rules for being a part of a group of people, and if you’re part of a group of people, that group has to have boundaries. If it doesn’t have boundaries then you’re not really a group; you’re just everybody. And there’s a scholar, Marilyn Brewer, who basically defined this by saying we have a psychological need for both inclusion and exclusion. So, we need to feel that we are part of something, but we also need to feel that not anybody can be part of it in order for us to feel important ourselves. We need to feel like we’re included in some group, and that there are outsiders. There are some people that don’t get in.

V.

Really, then, there are two sides of the coin when it comes to convincing yourself that you’re fighting a hero’s fight.

The first is the idea that your side is the persecuted underdog, the scrappy band of dissidents bravely taking a stand against oppressive forces stronger than itself. Every group uses this paradigm to represent itself – from liberals fighting against exploitation by corporate overlords, to conservatives fighting against the heavy hand of the state; from Christians fighting against the replacement of prayer in public schools with secular curricula, to secularists fighting against the use of religious justifications to deny sexual and reproductive rights. And mind you, a lot of the people making these claims really are the victims of various forms of mistreatment. Some of them really are persecuted underdogs. In fact, it’s even possible to have two opposing groups both feel like they’re being marginalized, and both be right at the same time, just in different ways. (As Ezra Klein notes, for instance, a fairly consistent rule of thumb is that the allocation of America’s political power tends to run a decade behind its demographics (i.e. it tends to skew older, whiter, more conservative, and more Christian than average), while cultural power – the messaging in academia and popular media and so on – tends to run a decade ahead of demographics (i.e. it tends to skew more toward younger, more diverse, more liberal, and more secular sensibilities). So both sides feel like they’re on the defensive, because they’re focusing on the areas where they actually are on the defensive – conservatives in terms of popular culture, and liberals in terms of political power. As Klein writes, “the Left feels a cultural and demographic power that it can only occasionally translate into political power, and the Right wields political power but feels increasingly dismissed and offended culturally.”) But regardless of who actually is the underdog in any given situation, every group identifies as such (or claims to be acting on behalf of those who are). And they try their best to ensure that they’re widely perceived as such, for reasons that aren’t hard to understand. After all, when you’re part of an unjustly oppressed group, it gives you the moral high ground. When you’re being persecuted for your ideas or your identity in a way that’s clearly undeserved, it no longer matters as much what your ideas or your identity actually are – it’s self-evident that you’re the good guy and your oppressor is the bad guy.

The thing is, though, more and more people are starting to realize this (if only subconsciously), and the result is that we now have this strange culture in which there’s a kind of competition for victimization. As McWhorter puts it, people are coming “to treat victimhood not as a problem to be solved but as an identity to be nurtured.” Adopting the role of the persecuted underdog has become the go-to tactic for ideological groups; and consequently, every debate quickly devolves into what Alexander calls a “bravery debate”:

There’s a tradition on Reddit that when somebody repeats some cliche in a tone that makes it sound like she believes she is bringing some brilliant and heretical insight – like “I know I’m going to get downvoted for this, but I believe we should have less government waste!” – people respond “SO BRAVE” in the comments. That’s what I mean by bravery debates. Discussions over who is bravely holding a nonconformist position in the face of persecution, and who is a coward defending the popular status quo and trying to silence dissenters.

These are frickin’ toxic. I don’t have a great explanation for why. It could be a status thing – saying that you’re the original thinker who has cast off the Matrix of omnipresent conformity and your opponent is a sheeple (sherson?) too fearful to realize your insight. Or it could be that, as the saying goes, “everyone is fighting a hard battle”, and telling someone else they’ve got it easy compared to you is just about the most demeaning thing you can do, especially when you’re wrong.

But the possible explanations aren’t the point. The point is that, empirically, starting a bravery debate is the quickest way to make sure that a conversation becomes horrible and infuriating. I’m generalizing from my own experience here, but one of the least pleasant philosophical experiences is thinking you’re bravely defending an unpopular but correct position, facing the constant persecution and prejudice from your more numerous and extremely smug opponents day in and day out without being worn-down … only to have one of your opponents offhandedly refer to how brave they are for resisting the monolithic machine that you and the rest of the unfairly-biased-toward-you culture have set up against them. You just want to scream NO YOU’RE WRONG SEFSEFILASDJO:IALJAOI:JA:O>ILFJASL:KFJ

[…]

Bravery debates tend to be so fun and addictive that they drown out everything more substantive. Sometimes they can be acceptable stand-ins for actually having an opinion at all. I constantly get far-right blogs linking to my summary of Reactionary thought, and I hope I’m not being too unfair when I detect an occasional element of “Oh, so that’s what our positions are!”. There seem to be a whole lot of Reactionaries out there who are much less certain of what they believe than that they are very brave and nonconformist for believing it.

Despite Alexander’s mention of right-wingers, most of the popular discussion of this phenomenon recently has focused on how it’s been embraced by the left, particularly on college campuses and in certain online spaces. This criticism is definitely justifiable in many cases; it may be annoying for liberals to admit, but there’s no shortage of compulsive offense-takers within their ranks who have taken their martyr complex well beyond what’s reasonable, and who are hypersensitive to anything and everything that could conceivably be interpreted as hinting at some subtle underlying prejudice or affront. As Tamler Sommers puts it:

I think one of the things you [have to] acknowledge is that there are certain people who have a kind of hysterical, overwrought reaction to, just, life – and I think it’s overhyped by the media often, but those students do exist; [and] they’re not always right; they’re not always reasonable in how they respond to what they consider to be offensive or aggressive speech.

We’ll get into this a bit more later. But like I said, this kind of behavior is far from exclusive to the left. In fact, even groups on the extreme opposite end of the political spectrum, like neo-Nazis – who you’d think would be the last people in the world to claim the mantle of victim – are doing so emphatically. As Jesse Singal observes:

When violence [against neo-Nazis] does break out, videos of it race through the internet’s white-supremacist underbelly, serving as incredibly valuable PR material. It doesn’t matter who gets the better of a given confrontation: When the Nazis get punched, it’s “proof” that anti-fascists or liberals or [insert minority group] or whoever else did the punching have it in for “innocent white Americans just trying to protest peacefully.”

To which one commenter, notallowedtopost, adds:

Yeah, this is why the “make racists afraid again” slogan I’ve seen in quite a few places around the internet doesn’t make any sense to me at all. If racists weren’t already afraid of all the scary black and brown people, they wouldn’t be racists. A huge part of their ideology is their own victimhood. Making them more afraid is just going to make them more racist.

It may be easy to scoff at these claims of victimhood and dismiss them as being rooted in ulterior motives – to think that these people are obviously just exaggerating their problems solely for cynical reasons. But it seems to me that most of the people claiming to be persecuted underdogs – whether on the left or the right, whether part of a religious sect or some other ideological group – have genuinely convinced themselves that they really are victims of at least some form of oppression. A persecution mindset, like any other form of motivated reasoning, has a way of feeding on itself; as Maria de la Guardia puts it, “If you constantly hear people say you should be outraged, offended, and traumatized, you’re more likely to be so.” (And again, it’s worth stressing that some groups really are being persecuted, so they’re right to feel oppressed – but even those who aren’t can genuinely feel like they are, and their feelings may even be perfectly understandable. Even the most privileged groups sometimes have to deal with negative backlash as a result of their privilege, and from their perspective that can feel like they’re being persecuted themselves.) So the end result is that you keep having these bizarre disputes where both sides are constantly claiming to be oppressed by the other side, and neither side is willing to grant any credibility to the other’s claims. From the perspective of a neutral onlooker, the whole situation can just seem bewildering.

For agitators on each side, though, that’s no reason to tone down or moderate their claims of victimization; on the contrary, it’s all the more incentive to escalate them as much as possible. After all, if you feel like your side really is being victimized, but the other side is proclaiming their own victimhood just as strongly, it’s not like you can just unilaterally back down; that would give the impression that your side has nothing to complain about, and that the other side really must be the oppressed one. If you want to show that your side is in fact the oppressed one, you have to show that your grievances outweigh those of the other side – and that means you have to find and draw attention to as many incidents of wrongdoing being committed against your side as you possibly can, and to claim that the severity of the oppression in each of these cases is as extreme as possible – even in cases where the offense isn’t actually all that outrageous. (This incentive can exist even if you aren’t in a victimization arms race with an opposing side, but are simply trying to capture and maintain maximal attention for your cause in its own right.) The way this often plays out, a particular group might start off with some legitimate grievances – and because they’re legitimate, these grievances will capture the popular attention and will ultimately be resolved in the group’s favor – but then, feeling like they need to continue having some cause that they can fight for, group members will turn their attention to more marginal issues that aren’t quite as open-and-shut, and will start protesting them as if they were every bit as urgent and clear-cut as the more egregious causes they had been protesting previously. (So this is how you end up with people on the left equating unpaid college football with slavery, for instance, or people on the right saying that requiring a permit to own a bazooka is tantamount to totalitarianism. One of the most popular ways of drawing people’s attention to issues that might otherwise be considered relatively minor is to exaggerate the stakes to make them look much more major.) Ultimately, what starts off as the group genuinely feeling like their side is under attack can subtly morph into something less authentic in its urgency that is nevertheless still portrayed as a full-blown crisis. Inevitably, though, this disparity proves unsustainable. Once the claims of victimization escalate past the limits of credibility, they begin to delegitimize the group’s own cause rather than bolstering it. In the end, relying too much on this tactic of exaggerating your grievances tends to be self-sabotaging – because although it might win you some support in the short term from onlookers who don’t know any better, it also means that the ones who do notice the disparity between the intensity of your protests and the degree to which you really are being oppressed will be that much more likely to stop taking you seriously and tune you out in the future. And when that happens, you’ll have not only undermined your own cause – you’ll have also drawn attention away from those who really are being most severely victimized.

Mark Manson gives his take on the subject:

“Victimhood chic” is in style on both the right and the left today, among both the rich and the poor. In fact, this may be the first time in human history that every single demographic group has felt unfairly victimized simultaneously. And they’re all riding the highs of the moral indignation that comes along with it.

Right now, anyone who is offended about anything – whether it’s the fact that a book about racism was assigned in a university class, or that Christmas trees were banned at the local mall, or the fact that taxes were raised half a percent on investment funds – feels as though they’re being oppressed in some way and therefore deserve to be outraged and to have a certain amount of attention.

The current media environment both encourages and perpetuates these reactions because, after all, it’s good for business. The writer and media commentator Ryan Holiday refers to this as “outrage porn”: rather than report on real stories and real issues, the media find it much easier (and more profitable) to find something mildly offensive, broadcast it to a wide audience, generate outrage, and then broadcast that outrage back across the population in a way that outrages yet another part of the population. This triggers a kind of echo of bullshit pinging back and forth between two imaginary sides, meanwhile distracting everyone from real societal problems. It’s no wonder we’re more politically polarized than ever before.

The biggest problem with victimhood chic is that it sucks attention away from actual victims. It’s like the boy who cried wolf. The more people there are who proclaim themselves victims over tiny infractions, the harder it becomes to see who the real victims actually are.

In other words, just like everything else, persecution – when used as an indicator of general righteousness – is subject to Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.”
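To see that dynamic in miniature, here’s a toy simulation of Goodhart’s Law at work (a sketch of mine, with every number in it invented purely for illustration): so long as displayed outrage is just a measure, ranking people by it really does surface the most aggrieved; the moment it becomes a target that everyone pegs at the maximum, the same ranking turns into noise.

```python
import random

# Toy Goodhart's Law sketch (all numbers are illustrative assumptions):
# an honest signal tracks the hidden quantity; a gamed signal, pushed to
# the maximum by everyone, tracks nothing.

random.seed(0)
N = 1000
grievance = [random.random() for _ in range(N)]  # hidden "true" victimhood

def displayed_outrage(g, gamed):
    noise = random.gauss(0, 0.05)
    if not gamed:
        return g + noise  # measure: outrage reflects genuine grievance
    return 1.0 + noise    # target: everyone maxes out the badge

def top_decile(values):
    """Return the indices of the highest 10% of values."""
    cutoff = sorted(values, reverse=True)[N // 10 - 1]
    return {i for i, v in enumerate(values) if v >= cutoff}

truly_aggrieved = top_decile(grievance)
as_measure = top_decile([displayed_outrage(g, gamed=False) for g in grievance])
as_target = top_decile([displayed_outrage(g, gamed=True) for g in grievance])

# What fraction of the genuinely worst-off are still visible at the top?
print(len(truly_aggrieved & as_measure) / len(truly_aggrieved))  # high (~0.9)
print(len(truly_aggrieved & as_target) / len(truly_aggrieved))   # ~0.1, i.e. chance
```

The genuinely worst-off don’t disappear in the second case; they just become statistically indistinguishable from everyone else shouting at the ceiling.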

Nevertheless, every group persists in using this tactic – because as far as they’re concerned, they really are the victims, and claiming such doesn’t undermine their credibility, it bolsters it. For them, claiming victimhood status reinforces their favored narrative that their side is the long-suffering hero of the story fighting against formidable odds. It helps them win sympathy among neutral observers, making it easier to recruit allies for their fight. And last but not least – and most worryingly of all – it provides a moral justification for them to take retaliatory action against their opponents.

(Quick footnote, by the way: Roy Baumeister has a chapter in his book Evil, in which he discusses how even violent criminals, wife-beaters, serial killers, and genocidal regimes often regard themselves as victims in a sense – justifying their actions not as spontaneous outbursts of violence, but as righteous retribution for the cruel wrongdoing that had previously been inflicted upon them by the people they would later end up targeting. The chapter is too long to include here, but I’ll provide this link to the relevant section and highly recommend that you check it out (in fact, the whole book is worth reading).)

VI.

This is the other side of the hero coin – not just that your group is being unjustly persecuted, but that you’re therefore justified in fighting back as aggressively as possible. As Baumeister writes:

One of the most powerful and universal human tendencies is to identify with a group of people similar to oneself, and to square off against rival groups. Moreover, people automatically and inevitably begin to think that their group is good. But if we are good, and you are our opponents, and evil is the opponent of the good, then you must be evil. Groups of people everywhere will come to that same conclusion, even groups on opposite sides of the same conflict. The stronger the tendency to see one’s own group as good – and this tendency is often surprisingly strong – the more likely one is to regard one’s rivals and enemies as evil. Such views may then be used to provide easy justification for treating one’s enemies harshly, because there is no point in being patient, tolerant, and understanding when one is dealing with evil.

[…]

Victims will tend to see the people who have hurt them […] as committing wicked deeds for no valid reason and getting sadistic pleasure out of breaking rules or inflicting harm. To respond with zeal, maybe even going slightly beyond the extent of one’s original suffering, seems appropriate if one is dealing with an evil character. There is no sense in practicing forbearance, restraint, or mercy if one is dealing with a truly evil person.

Suppose Adolf Hitler came back from the dead, ran a red light, and bumped into the back of your car. Undoubtedly, you would want to sue him to the maximum extent allowed by law, and if your lawyer suggested exaggerating your injuries a little you would probably be willing to do so. (In contrast, if Gandhi came back and dented your fender, you’d be likely to forget the whole thing.) Extreme measures seem appropriate when one is retaliating against a thoroughly evil person. Unfortunately, because victims tend to see those who harm them in extreme terms, they will usually be prone to think that extreme retaliations are appropriate.

And it isn’t just that we become more willing to indulge our nastier impulses when we’re dealing with some enemy we regard as evil; we’re also inclined to think more highly of ourselves for doing so. We as a society seem to have internalized the idea that our estimation of ourselves as people shouldn’t just be based on how much positive good we can do in the world, but also on how much righteous retribution we can inflict on those we consider evil. The most kickass heroes in our movies and TV shows (think Dirty Harry, The Punisher, etc.) are those who are most merciless toward evil and who inflict the most possible suffering on the bad guys. Ruthlessness – and even cruelty – are considered virtues when they’re used in service of a good cause. Sure, we pay lip service to the platitude that these things are bad – but the truth is that we generally act as though ruthlessness and cruelty are great as long as they’re directed at the right targets. Is there a news story about terrorists attacking our country? Let’s start fantasizing about all the nasty things we’d love to do to those sons of bitches after we hunt them down. Is there a movie about a loving father whose daughter is kidnapped? Let’s start puffing out our chests and boasting about how unstoppable our wrath would be if anyone ever dared to do such a thing to our children. I’m sure you’ve heard these kinds of conversations before. People feel compelled to talk this way, both by the strength of their moral convictions and by their desire to signal those convictions to others. After all, what would it say about a person if they didn’t see red and want to destroy anyone who dared to attack the people they cherish? Wouldn’t it mean that they didn’t cherish their loved ones very much after all?

The same kind of mentality has also become widespread when it comes to our conversations about ideas and values. The more indignant and outraged you get in the face of injustice (so the thinking goes), the more morally valiant you are. After all, what could be nobler than a person who stands up to the oppressors and shouts, “No, dammit, this is wrong!” and fights back with all the fury they can muster? According to this mindset, hating things that are bad makes you good, and your goodness is directly proportional to your hatred. And as an extension of this idea, the more indignant and outraged you can get towards wrongdoers – and ultimately, the nastier you can be towards them and the less willing to listen to their perspective – the more noble and moral you are. These days, we don’t just define our virtue in terms of our positive actions and the worthy causes we want to support; we’ve come to define our virtue more and more in terms of the things we have contempt for and want to destroy. As Wong writes:

Hating a bad thing does not make you good. […] The Klan hates ISIS, but we don’t count that as a point in their favor. Yet I’m pretty sure that most of what we consider being good in this culture is just having disdain for the right things.

Within this context, then, things like bullying, harassment, and mob punishment are completely fine as long as you’re bullying, harassing, and punishing the right people. People on social media are often proud of how vicious they can be toward the people they dislike; the word “savage” has literally become a compliment, used to applaud somebody for how brutally they can eviscerate their target. And if you’re ever tempted to try and understand the other side, much less treat them with empathy and respect, then that impulse must surely be suppressed in the name of beating them. After all, what would it say about a person if they didn’t see red and want to destroy anyone who dared to attack the values they cherish? Wouldn’t it mean that they didn’t cherish those values very much after all?

Social psychologists often talk about “sacred beliefs” – beliefs that are so meaningful to us that even the very suggestion of compromising on them is an unforgivable insult. Dagny explains:

One way to define the difference between a regular belief and a sacred belief is that people who hold sacred beliefs think it is morally wrong for anyone to question those beliefs. If someone does question those beliefs, they’re not just being stupid or even depraved, they’re actively doing violence. They might as well be kicking a puppy. When people hold sacred beliefs, there is no disagreement without animosity. In this mindset, people who disagreed with my views weren’t just wrong, they were awful people. I watched what people said closely, scanning for objectionable content. Any infraction reflected badly on your character, and too many might put you on my blacklist. Calling them ‘sacred beliefs’ is a nice way to put it. What I mean to say is that they are dogmas.

Treating certain beliefs in this way is how we end up with the kind of thing Alexander observes here:

I am constantly amazed by how small a buffer the average person has between “I don’t believe X” and “Believing X is irredeemably evil and we must mock and shame it until the very possibility of expressing it is beyond the pale”.

If you can convince yourself that having the right beliefs isn’t just a matter of knowing the right information, it’s a matter of being good or evil, then giving your ideological enemies the benefit of the doubt isn’t a matter of being civil or reasonable – it’s sympathizing with evildoers; and that makes you nearly as bad as they are. Here’s Baumeister again:

A powerful and important factor in idealistic evil is the attitude toward the victim. We have seen hints of this attitude already. Some perpetrators reported feeling guilty if they had any doubts or felt any sympathy toward the victims. Idealistic evil permits and sometimes even demands that its agents despise their victims.

The logic behind this attitude is built into the situation, and it is difficult to resist. If you think that you are doing something that is strongly on the side of the good, then whoever opposes you or blocks your work must be against the good – hence, evil. This conclusion is far more than just a convenient way of rationalizing one’s violence toward certain people. It is central to the idealist’s basic faith that he is doing the right thing. The enemies of the good are, almost by definition, evil. To perceive them as any less than that – to allow that one’s opponents have a legitimate point of view, for instance – is to diminish one’s own side’s claim to be good. One is not fighting the good fight if the enemy is good, too. Therefore, to sustain one’s own goodness, it is essential to see the enemy as evil.

Thus, idealism usually ends up conferring a right, a license, to hate. As we will see shortly, people do not generally need a great deal of urging to despise the groups that are arrayed against them, and so it would be too much to say that idealism is fully responsible for creating such hatred. But idealism does permit it. Once the collective understanding of good declares that it is correct to hate a certain category of others, people will readily oblige.

One consequence of this apparent duty is that it will tend to put the more extreme and fanatical members of the group into the positions of moral leadership. Consider the activist in the Ukraine [during the 1932-33 Stalinist terror-famine] who “let him have it hot and strong” when a colleague expressed some doubts or sympathy for the victims. The members with the firmest sense of hatred will end up being the ones that the others look to for support and guidance. Yet the activist who “let him have it” harbored his own private doubts, which suggests a very potent split between public statements and inner sentiments. He privately agreed with the other young man’s doubts about the legitimacy of the whole process, but what he said publicly was to have no pity.

The duty to hate continues to be a source of vexation in modern society, despite the society’s apparent commitment to tolerance, understanding, and moral relativism. For years, Americans felt comfortably entitled to be hostile toward Communists, from the Soviet and Chinese enemies who seemed ready to attack us with lethal weapons to home-grown Communists. The McCarthyist “Red scare” and persecutions of the 1950s emerged in part because it became safe and appropriate to direct hostility toward these internal enemies. Now, with the fall of the Soviet Union, one is no longer supposed to hate those poor Russians, and the adjustment is difficult for some dedicated American patriots.

Ironically, the very effort to tolerate and value diversity constitutes a license to hate those who disagree. One of the core paradoxes in the recent social evolution in the United States is how the broad desire to overcome prejudice and ethnic antagonisms has resulted in a society that seems more fragmented and prejudiced than ever. Each group firmly believes that it holds positive, inclusive, desirable values, and so other groups are gradually assumed to be inimical to these positive values. And if the other group is opposed to the good, then by definition it must be evil. Each group feels attacked by others, as in the current debate (as this is being written) on the future of affirmative action programs that support preferential hiring of members of disadvantaged groups. Each side perceives the others as selfishly and unfairly trying to benefit at its expense. In other words, both minorities and whites can see themselves as holding the values of fairness and equality and the other side as opposed to those values – and hence, wicked. Along the same lines, the 1995 World Conference on Women was held in China during the time this book was being written, and the American delegate Betty Friedan (author of The Feminine Mystique, one of the most influential works in the women’s movement) felt compelled to argue in a national publication against the conference’s strident and oppositional tendencies: “countering the hatred of women with a hatred of men” was a bad strategy, she emphasized, recognizing that such category antagonisms evolve all too readily.

In many cases, the consequence of one’s own presumptive goodness is more than a license to hate one’s opponents: It is a positive duty to hate them. Sometimes it is difficult to ascertain how much this matters, because people are often willing to hate without needing much encouragement. Still, when dealing with members of the group who might have doubts or otherwise lack sufficient feelings of animosity, the others may feel entitled to put pressure on them to get with the program and summon up the appropriate degree of hostility. If you do not hate Satan, then there is something wrong with you.

Jeffrey Burton Russell discussed a topic of medieval debate: “Are we to hate the Devil as much as we love Christ?” The answer was presumably no, but it was a close enough call to be worth debating. A good Christian’s emotional duties were said to include both hatred and love, directed toward the appropriate targets. The duty to love Christ is supreme, but the duty to hate the Devil may be almost as strong.

There is ample evidence that perpetrators of violence learn to detest their victims. Thus, state torturers are selected partly on the basis of their ideological purity, and they are taught that their victims are part of a dangerously powerful movement that aims to destroy their country. They learn (and one must assume that they are willing to accept) that their enemies in general are evil, and so even if they can see that the particular individuals they are torturing have little useful information to offer and are ultimately just human beings in pain, the torturers can still feel it is appropriate to make them suffer. These prisoners belong to the evil group.

Likewise, terrorists are generally fervent utopians who see “the establishment” (the government they oppose, and its supporters) as evil. President Clinton called the terrorists who bombed the Oklahoma City federal building in 1995 “evil” for committing America’s worst act of terror. Yet to them, or at least to many people like them, the government is evil. Terrorist groups attract people who are hostile to authority. (This is ironic, because terrorist groups tend to be quite authoritarian, with the leader having close to absolute power in the group.) Terrorists, too, feel that pity for one’s victims is an unacceptable sign of weakness and a source of shame.

Terrorists have an interesting special problem of self-justification. They tend to choose random targets such as buses or public libraries, full of what most others regard as innocent victims. Terrorists cannot afford to accept that view, however. Acknowledging that the group really did kill innocent victims would undermine their faith in the goodness of their own cause. Hence, terrorists tend to adamantly reject the idea that those people are innocent. Sentiments such as “anyone who is not with us is against us” are popular with such groups, because they conveniently allow the group to despise all their victims as enemies. Likewise, terrorists tend to adopt very broad categories of enemies. If they regarded only the top government officials and the police as their enemies, they might find it difficult to avoid victimizing innocent people. But if they broaden the category to include anyone in the society who is not actively opposed to the government, then few innocents remain, and they can plant their bombs in public places without a guilty conscience.

These are extreme examples, of course. Usually the tendency to dehumanize your ideological enemies isn’t as dramatic as this; usually it’s just a casual bias that you indulge in because it makes your side feel more heroic and because it makes winning more satisfying. As Brennan writes in his discussion of partisan voting behaviors:

Since individual votes don’t matter and hating other people is fun, voters have every incentive to vote in ways that express tribal biases. […] In the voting booth, I can indulge the bigoted fantasy that, say, the Republicans oppose legalized abortion because they hate women, or that Democrats want to allow flag burning because they hate the United States.

Never mind whether these caricatures of your opponents’ positions are necessarily the most accurate; the fact is, it feels more rewarding to wage ideological warfare against someone who’s unambiguously evil than against someone who’s well-intentioned but merely misguided – so if you can convince yourself that your opponents are evil, then that’s what you’ll do. Beating someone is more fulfilling when they really deserve to be beaten.

The result, though, is the kind of ugliness described by commenter MikeHot-Pence after the election of Donald Trump in 2016:

[There is a] minority [of voters] that elected Trump [which] is giddy, and as best I can tell from personal interaction and reading commentary on here from Trump supporters, it’s because they’re getting to hurt liberals. They aren’t excited about positive changes in a classic sense, they’re giddy because they get to watch people suffer whom they believe deserve to suffer.

Wong notices the same thing, and points out that it can just as easily go in the other direction as well:

There’s a huge difference between someone who voted for Trump because they believe lower corporate taxes spur employment and someone who only wanted a human hand grenade to put the hurt on those triggered libs.

If you really, really hate Trump, it will be very easy for some firebrand to come along promising to be the grenade thrown back in the other direction. That trend — voting only as an act of violence against a hated enemy — is a larger threat to the fabric of society than any individual policy. A leftist who wins with Trump tactics is like a corrupt cop framing a guy who by coincidence turned out to be guilty. Normalizing those tactics is worse for us in the long run, regardless of what happens to that one criminal.

As Yudkowsky points out, however, this pattern of voting based on hatred is basically already the norm:

[Alan] Abramowitz and [Steven] Webster found that what mainly predicted voting behavior wasn’t how much the voter liked their preferred party, but how much they disliked the opposing party. Essentially, the US has two major voting factions, “people who hate Red politicians” and “people who hate Blue politicians.”

This isn’t very encouraging. Hurting people is never something that should feel good, even if you think they deserve it. At best, hurting other people should feel like a painful but necessary evil that you only have to turn to as a last resort. With the situation now, though, people are looking to hurt each other as their first resort, without feeling bad about it at all. And when you’ve got both sides taking such an antagonistic approach to ideological debates, drawing their satisfaction from how much they can hurt each other, it makes it impossible to have productive conversations across ideological divides, because everyone is more interested in destroying their opponents than persuading them; they would rather wipe out their enemies than make them into allies. When both sides have convinced themselves that their opponents are so evil that they can’t be reasoned with or compromised with (you’ll often hear the term “animals” used to describe them in this context), then the thinking becomes that there’s no point in trying to be civil or rational towards the brutes – all you can do is put them down. Wong describes it this way:

There’s [this] eliminationist tendency that now seems baked into the culture. The goal is not to change minds or make incremental progress toward improvement, it’s to make the bad people vanish. Get them banned, get them fired, shut down their speaking engagement, declare victory.

[…]

You’re not trying to convert the enemy, or integrate them, or live with them, or compromise with them, even though virtually all problems in the real world are solved this way. [You] are satisfied with solutions that make them merely disappear.

He continues:

[Once you’ve made up your mind that your enemies are beyond reason, you’ll start believing that] every common courtesy granted to them is a self-inflicted wound, that every act of petty meanness is a victory, every cruel joke an act of courage, every misfortune on their side a cause for spiteful celebration. This is war […] which means all rules go out the window. Even though real war actually has lots of rules. Whatever.

This last point, about throwing out all the rules and going to war, is a particularly apt one right now. It has become increasingly popular for ideologically motivated people to decide that the customary rules for respectful human engagement shouldn’t apply to the particular conflict they’re involved in, on the basis that their enemies are so bad that they don’t qualify as legitimate actors deserving of respect. For all intents and purposes, what this means is that a lot of people no longer feel the need to concern themselves with things like fairly representing the other side’s viewpoints; if you can nail your opponents by misrepresenting their viewpoints, then by all means nail the bastards. If you can win by fighting dirty, then fight dirty. Evildoers aren’t entitled to fair treatment.

In practice, “fighting dirty” can take a few different forms. For instance, a lot of ideologues enter into conversations and debates not with the good-faith intention of gaining new insights and building bridges of understanding, but with the cynical bad-faith intention of trying to elicit condemnable statements from their opponents which can then be used against them later. Some people will even go out of their way to dredge through the entire history of everything their opponent has ever written or said, searching for some ill-considered quotation that can be used to make that opponent look bad. This is especially easy in our modern environment of ubiquitous technology and short attention spans, as Bret Stephens points out:

We live in the age of guilt by pull-quote, abetted by a combination of lazy journalism, gullible readership, missing context, and technologies that make our every ill-considered utterance instantly accessible and utterly indelible.

But even if your opponent doesn’t have any particularly damning statements on record, that doesn’t mean you can’t condemn them regardless. You just have to settle for the next best thing, which is to find some ambiguous statement they’ve made which could conceivably be interpreted as nefarious if you squint hard enough, and start loudly insisting that it definitely is nefarious and that it’s just as bad as if they had outright said the far worse thing that they really wanted to say. In other words, you find molehills and make them into mountains; you take your opponents’ words and overstate them in a way that allows you to denounce them. Alexander gives an example of this:

Back during the primary, Ted Cruz said he was against “New York values”.

A chump might figure that, being a Texan whose base is in the South and Midwest, he was making the usual condemnation of coastal elites and arugula-eating liberals that every other Republican has made before him, maybe with a special nod to the fact that his two most relevant opponents, Donald Trump and Hillary Clinton, were both from New York.

But sophisticated people immediately detected this as an “anti-Semitic dog whistle”, eg Cruz’s secret way of saying he hated Jews. Because, you see, there are many Jews in New York. By the clever stratagem of using words that had nothing to do with Jews or hatred, he was able to effectively communicate his Jew-hatred to other anti-Semites without anyone else picking up on it.

Except of course the entire media, which seized upon it as a single mass. New York values is coded anti-Semitism. New York values is a classic anti-Semitic slur. New York values is an anti-Semitic comment. New York values is an anti-Semitic code word. New York values gets called out as anti-Semitism. My favorite is this article whose headline claims that Ted Cruz “confirmed” that he meant his New York values comment to refer to Jews; the “confirmation” turned out to be that he referred to Donald Trump as having “chutzpah”. It takes a lot of word-I-am-apparently-not-allowed-to-say to frame that as a “confirmation”.

Meanwhile, back in Realityville (population: 6), Ted Cruz was attending synagogue services on his campaign tour, talking about his deep love and respect for Judaism, and getting described as “a hero” in many parts of the Orthodox Jewish community for his stance that “if you will not stand with Israel and the Jews, then I will not stand with you.”

But he once said “New York values”, so clearly all of this was just really really deep cover for his anti-Semitism.

[…]

Although dog whistles do exist, the dog whistle narrative has gone so far that it’s become detached from any meaningful referent. It went from people saying racist things, to people saying things that implied they were racist, to people saying the kind of things that sound like things that could imply they are racist even though nobody believes that they are actually implying that. Saying things that sound like dog whistles has itself become the crime worthy of condemnation, with little interest in whether they imply anything about the speaker or not.

Against this narrative, I propose a different one – [people’s] beliefs and plans are best predicted by what they say their beliefs and plans are, or possibly what beliefs and plans they’ve supported in the past, or by anything other than treating their words as a secret code and trying to use them to infer that their real beliefs and plans are diametrically opposite the beliefs and plans they keep insisting that they hold and have practiced for their entire lives.

As he admits, it’s true that sometimes people really do use coded language to provide cover for their more inflammatory opinions. But the problem with aggressively attacking everyone whose statements could conceivably be malicious is that you also end up catching a lot of well-intentioned people in the crossfire who just happened to make the mistake of using clumsy wording, or who didn’t understand enough of the socio-political context to realize why their statements might come off more negatively than they intended. Conor Friedersdorf puts it this way in his discussion of “concept creep,” showing how the phenomenon can apply equally readily to things like “language policing” and to actual policing:

Consider criminality, bullying, and racism. As fights against crime or bullying or racism intensify, crooks, bullies and racists try to hide their misdeeds; enforcers react – if a thief starts “innocently forgetting to pay,” a crackdown on the tactic is needed; if a bully starts kicking his victim under the table rather than punching him in the face, a definition of bullying as “open aggression” is shown to be flawed and insufficient; if racists no longer use racial slurs in public, but persist in using dog whistles, the latter are stigmatized. But efforts to encompass covert bad behavior tend to target increasingly minor acts, and more alarmingly, to rely on opaque or subjective assessments that capture some non-crooks, non-racists, and non-bullies. More innocents are thus searched or arrested or dubbed racists or bullies.

Invariably, this triggers a backlash and an ensuing debate that is muddled in a particular way. When critics of the criminal-justice system or progressive anti-racism suggest that society is now punishing some people wrongly or too severely, defenders of the status quo accuse them of acting as apologists for criminals or racists. The core of disagreement actually concerns whether concept creep has gone too far.

The antagonistic-minded ideologue’s solution, of course, is to just disregard these subtleties. Sure, maybe in some abstract intellectual sense there are various degrees of wrongness distinguishing the outright evildoers from those who simply don’t know any better; but when you’re at war, there’s no room for such distinctions. Even the slightest hint of wrong thinking must be stamped out swiftly and harshly as soon as it appears. And what this means in practice is that you mustn’t even attempt to determine whether your perceived opponents actually are malicious; you must simply assume maliciousness a priori. If you’re a conservative who notices liberals protesting against war or police brutality, you can’t even consider the possibility that their intentions are good and that they want to protect innocent people from being killed – you have to assume that they must be terrorist sympathizers who hate their country and the men and women who protect it. Or if you’re a liberal who notices a white child enthusiastically dressing up as someone from another race or culture, you can’t even consider the possibility that she admires that culture and wants to appreciate it – you have to assume that the child and her family are racial imperialists who want to selfishly usurp and despoil minority cultures. Likewise, if you’re a strong religious believer who hears about strangers who don’t share your beliefs, you can’t even consider the possibility that their disbelief might be based on honestly never having had a religious experience before, or on genuine doubts about whether the fantastical stories of scripture are really true – you have to assume that they must be well aware of the truth of your religion and are simply rebelling against God out of selfish egotism or a depraved desire to live in sin. And if you’re an atheist, you can’t even consider that religious people’s adherence to their doctrines might be based on deeply-held ideals and experiences – you have to assume that they’re all megalomaniacal theocrats.

No matter which side you’re on, this kind of black-and-white thinking assumes the same form. The agnostic who does his best to live well, but has genuine misgivings about committing his life to something he’s never found compelling, is considered to be just as malicious as the person who goes out of their way to break all Ten Commandments and loves every second of it. The white child who wants to emulate her minority heroes is considered to be just as malicious as the mocking blackface caricatures of the 1800s. The anti-war protestors who rally to “Bring Our Sons and Daughters Home” are considered just as malicious as the ones who say “Thank God for Dead Soldiers.” All moral gradations and distinctions are ignored, because the question isn’t really about whether maliciousness is actually present – the act itself is a tripwire that must necessarily trigger a certain response. If the offending action is the type of thing that would surely be malicious if it were carried out by an ill-intentioned evildoer, then that makes it an evil action – and by extension, anyone who commits an evil action must therefore be an ill-intentioned evildoer themselves.

This is how relatively well-intentioned people can unwittingly find themselves being lumped in with skinheads, cop killers, devil worshipers, or totalitarians by their ideological opponents. Admittedly, some of this is just due to those opponents being confused about the difference between actions and intentions, and failing to make the distinction between the two. It’s also worth stressing that this kind of line-blurring can often happen simply as a result of certain unconscious pattern-matching instincts that everyone falls prey to at times. If, for instance, you see a bunch of radical feminists raving that all men are scum, and this leads you to develop a negative opinion of feminism, then even if the next person you encounter isn’t saying anything extreme at all but is simply (say) celebrating the anniversary of women’s suffrage, or some other perfectly appropriate thing, it might nevertheless rub you the wrong way just because it sounds feminist and feminism doesn’t sit well with you. Even if the other person is saying something that you’d actually agree with yourself in a vacuum, you may still find yourself thinking less of them simply because you’ve unconsciously pattern-matched what they’re saying to something else that you dislike. This is something we’ve probably all experienced ourselves at some point (maybe not with feminism, but with some issue or another) – but we can often snap ourselves out of it simply by becoming consciously aware of it and making an effort to take what the other person is saying or doing on its own terms, rather than judging by association. Still, though, there are unfortunately all too many cases nowadays of people blurring those lines on purpose, in order to avoid having to deal with those subtleties at all, so that they can just dismiss everyone on the other side as evil and get back to enjoying their superiority over them. Or if they’ve got a particular person they want to destroy, they may intentionally try to lump them together with something worse, knowing that if that something else is bad enough, it will trigger that pattern-matching instinct in other people and turn everyone against the target. As Wong writes:

The right has had great success equating the Black Lives Matter movement with rioters and cop killers — 57 percent of Americans have a negative view of the group. If one person in a crowd of thousands breaks a window, that’s all it takes.

[And many on the left] do the same. Anyone on the side of deregulation, tax cuts or cuts to social programs is technically on the same “side” as white nationalist terrorists. Well, there’s clearly no point in arguing with a skinhead who found a way to rhyme “genocide” in a chant, and that guy votes Republican, so clearly there’s no arguing with anyone who votes Republican.

It’s not hard to see how pushing this kind of absolutist line can backfire, of course. For one thing, when you’re as unforgiving toward someone who makes an innocent faux pas as you are toward genuine hate – when your tripwire is so sensitive that you end up punishing the undeserving as reflexively as the deserving – it makes it less likely that others will perceive your indignation as legitimate. Not only do you alienate the people you’re attacking (obviously), you convey the message that your hair-trigger is so indiscriminate that you’ll go off on anyone, regardless of their degree of guilt – and at that point, we’re right back to the “boy who cried wolf” dynamic. People stop taking your outrage seriously and just start blowing you off.

Going back to our original point, though, about observing the rules of civil engagement, there’s also the broader fact that when you decide to just toss out all the rules and start demonizing, misquoting, and attacking your opponents to your heart’s content without any regard for their actual guilt or innocence, you upset the entire delicate balance of good faith that allows ideas to be exchanged in the first place. Our ability to make intellectual progress as a society depends entirely upon the fact that opposing ideological groups are able to, at least to some extent, collectively maintain a kind of unspoken truce based on observing the rules of respectful human discourse. But when one group decides to shift to a mindset of “Everyone needs to play by the rules EXCEPT US BECAUSE WE’RE RIGHT!” (as commenter therenegadepixie puts it), then the whole thing breaks down. Alexander elaborates:

We live in a tolerant liberal society, which means that in our society’s common state-sponsored areas, like legislation, national symbolism, and school policy, we have this thing going where we respect people who hold different opinions from ourselves. Even if they’re wrong.

In one sense, this is a sort of cease-fire between lots of different groups. Each of us would prefer if the apparatus of government was used to enforce/indoctrinate our own ideas to the exclusion of all others. But we know that in a world where everyone tried to do this, we’d have an equal chance of ending up as the persecutee rather than the persecutor, and there would be so much conflict that it would all end up much worse off than if no one tried to persecute anyone at all. Therefore, all the various groups with their various opinions agree to a cease-fire in which none of them try to persecute anyone else.

In another sense, this is simple epistemic modesty. It’s the sort of thing where a Catholic says “I think everyone else would be better off under a Catholic theocracy, in fact everything I believe tells me it’s practically certain. But the Muslims say they feel equally certain about an Islamic theocracy. I wouldn’t want the Muslims to create a theocracy with their current level of certainty, so I will follow my own principles and not create a Catholic theocracy even if I’m given the chance.”

These are both really, really smart ideas. They’re the reason why schools in heavily Republican districts aren’t supposed to blatantly indoctrinate the children with conservative ideas, and why schools in heavily Democratic districts aren’t supposed to blatantly indoctrinate the children with liberal ideas. They’re the reason Christians generally get successfully challenged in court whenever they do things like put up monuments to the Ten Commandments in courthouses even if the judge in the case is [themselves] a Christian, and why atheist teachers aren’t allowed to try to deconvert children in school. They’re a big chunk of the reason behind the First Amendment, too.

But part of having these rules is giving up some of your ability to judge other people, even obviously stupid people. The Jehovah’s Witnesses refuse to accept blood transfusions, even when their life is in danger. As a medical student, I would very much like to say “Well, that’s stupid, and this Jehovah’s Witness happens to be unconscious, so haha, [they] can’t stop me.” Because I am obeying the cease-fire and being epistemically modest, I don’t do so, even though I know with 100-epsilon percent probability that the Jehovah’s Witness is wrong. In exchange, when that Jehovah’s Witness is catering at an event, [they] include a vegetarian option for me even though [they] personally think vegetarianism is idiotic.

The point is that I shouldn’t say “Your belief is stupid, therefore it doesn’t matter”, even when someone else’s belief is stupid, unless I’m ready to completely demolish the whole system of cease-fires.

He illustrates this point further:

Suppose I am a radical Catholic who believes all Protestants deserve to die, and therefore go around killing Protestants. So far, so good.

Unfortunately, there might be some radical Protestants around who believe all Catholics deserve to die. If there weren’t before, there probably are now. So they go around killing Catholics, we’re both unhappy and/or dead, our economy tanks, hundreds of innocent people end up as collateral damage, and our country goes down the toilet.

So we make an agreement: I won’t kill any more Catholics, you don’t kill any more Protestants. The specific Irish example was called the Good Friday Agreement and the general case is called “civilization”.

So then I try to destroy the hated Protestants using the government. I go around trying to pass laws banning Protestant worship and preventing people from condemning Catholicism.

Unfortunately, maybe the next government in power is a Protestant government, and they pass laws banning Catholic worship and preventing people from condemning Protestantism. No one can securely practice their own religion, no one can learn about other religions, people are constantly plotting civil war, academic freedom is severely curtailed, and once again the country goes down the toilet.

So again we make an agreement. I won’t use the apparatus of government against Protestantism, you don’t use the apparatus of government against Catholicism. The specific American example is the First Amendment and the general case is called “liberalism”, or to be dramatic about it, “civilization 2.0”.

Every case in which both sides agree to lay down their weapons and be nice to each other has corresponded to spectacular gains by both sides and a new era of human flourishing.

“Wait a second, no!” someone yells. “I see where you’re going with this. You’re going to say that agreeing not to spread malicious lies about each other would also be a civilized and beneficial system. Like maybe the Protestants could stop saying that the Catholics worshipped the Devil, and the Catholics could stop saying the Protestants hate the Virgin Mary, and they could both relax the whole thing about the Jews baking the blood of Christian children into their matzah.

“But your two examples were about contracts written on paper and enforced by the government. So maybe a ‘no malicious lies’ amendment to the Constitution would work if it were enforceable, which it isn’t, but just asking people to stop spreading malicious lies is doomed from the start. The Jews will no doubt spread lies against us, so if we stop spreading lies about them, all we’re doing is abandoning an effective weapon against a religion I personally know to be heathenish! Rationalists should win, so put the blood libel on the front page of every newspaper!”

Or, as [one critic pseudonymously referred to as] Andrew puts it:

Whether or not I use certain weapons has zero impact on whether or not those weapons are used against me, and people who think they do are either appealing to a kind of vague Kantian morality that I think is invalid or a specific kind of “honor among foes” that I think does not exist.

So let’s talk about how beneficial game-theoretic equilibria can come to exist even in the absence of centralized enforcers. I know of two main ways: reciprocal communitarianism, and divine grace.

Reciprocal communitarianism is probably how altruism evolved. Some mammal started running TIT-FOR-TAT, the program where you cooperate with anyone whom you expect to cooperate with you. Gradually you form a successful community of cooperators. The defectors either join your community and agree to play by your rules or get outcompeted.

Divine grace is more complicated. I was tempted to call it “spontaneous order” until I remembered the rationalist proverb that if you don’t understand something, you need to call it by a term that reminds you that you don’t understand it or else you’ll think you’ve explained it when you’ve just named it.

But consider the following: I am a pro-choice atheist. When I lived in Ireland, one of my friends was a pro-life Christian. I thought she was responsible for the unnecessary suffering of millions of women. She thought I was responsible for killing millions of babies. And yet she invited me over to her house for dinner without poisoning the food. And I ate it, and thanked her, and sent her a nice card, without smashing all her china.

Please try not to be insufficiently surprised by this. Every time a Republican and a Democrat break bread together with good will, it is a miracle. It is an equilibrium as beneficial as civilization or liberalism, which developed in the total absence of any central enforcing authority.

When you look for these equilibria, there are lots and lots. Andrew says there is no “honor among foes”, but if you read the Iliad or any other account of ancient warfare, there is practically nothing but honor among foes, and it wasn’t generated by some sort of Homeric version of the Geneva Convention, it just sort of happened. During World War I, the English and Germans spontaneously got out of their trenches and celebrated Christmas together with each other, and on the sidelines Andrew was shouting “No! Stop celebrating Christmas! Quick, shoot them before they shoot you!” but they didn’t listen.

All I will say by way of explaining these miraculous equilibria is that they seem to have something to do with inheriting a cultural norm and not screwing it up.

[…]

I think most of our useful social norms exist through a combination of divine grace and reciprocal communitarianism. To some degree they arise spontaneously and are preserved by the honor system. To another degree, they are stronger or weaker in different groups, and the groups that enforce them are so much more pleasant than the groups that don’t that people are willing to go along.

[…]

I feel like we’ve got a good thing going, we’ve ratified our Platonic contract to be intellectually honest and charitable to each other, we are going about perma-cooperating in the Prisoner’s Dilemma and reaping gains from trade.

And then someone says “Except that of course regardless of all that I reserve the right to still use lies and insults and harassment and dark epistemology to spread [my side’s ideology]”. Sometimes they do this explicitly […] Other times they use a more nuanced argument like “Surely you didn’t think the same rules against lies and insults and harassment should apply to oppressed and privileged people, did you?” And other times they don’t say anything, but just show their true colors by reblogging an awful article with false statistics.

(and still other times they don’t do any of this and they are wonderful people whom I am glad to know)

But then someone else says “Well, if they get their exception, I deserve my exception,” and then someone else says “Well, if those two get exceptions, I’m out”, and you have no idea how difficult it is to successfully renegotiate the terms of a timeless Platonic contract that doesn’t literally exist.

No! I am Exception Nazi! NO EXCEPTION FOR YOU! Civilization didn’t conquer the world by forbidding you to murder your enemies unless they are actually unrighteous in which case go ahead and kill them all. Liberals didn’t give their lives in the battle against tyranny to end discrimination against all religions except Jansenism because seriously fuck Jansenists. Here we have built our Schelling fence and here we are defending it to the bitter end.

[…]

[The idea] that “Evil people are doing evil things, so we are justified in using any weapons we want to stop them, no matter how nasty” suffers from a certain flaw. Everyone believes their enemies are evil people doing evil things. If you’re a Nazi, you are just defending yourself, in a very proportionate manner, against the Vast Jewish Conspiracy To Destroy All Germans.

[…]

[The] principle – kind of a supercharged version of liberalism – of “It is not okay to use lies, insults, and harassment against people, even if it would help you enforce your preferred social norms” […] gets us a heck of a lot closer to [our vision for a better world] than [the] principle of “Go ahead and use lies, insults, and harassment if they are effective ways to enforce your preferred social norms.”
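As an aside, the game theory underneath this whole passage is concrete enough to write down. Below is a minimal sketch, in Python, of the TIT-FOR-TAT dynamic Alexander describes. The payoff values are the standard ones from Axelrod’s iterated prisoner’s dilemma tournaments; the strategy names and the round count are illustrative choices of mine, not anything from the original essay.

```python
# Standard Axelrod payoffs: (my_move, their_move) -> my score.
# "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): 3,  # mutual cooperation ("civilization")
    ("C", "D"): 0,  # I'm the sucker
    ("D", "C"): 5,  # I exploit them
    ("D", "D"): 1,  # mutual defection (everyone kills everyone)
}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first round, then mirror the opponent's last move."""
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    """Grab the short-term payoff every single round."""
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Run an iterated game and return each player's total score."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# Two reciprocators settle into permanent cooperation and prosper...
print(play(tit_for_tat, tit_for_tat))    # (600, 600)
# ...while a defector wins the first round against TIT-FOR-TAT and then
# both players stagnate at the mutual-defection payoff.
print(play(always_defect, tit_for_tat))  # (204, 199)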

If you really care about trying to make the world a better place – if you’re really committed to carrying the torch of good – then upholding the rules of civil discourse is crucial. Again though, the problem is that too many people are acting in a way that puts a higher priority on mocking and humiliating the other side than on trying to find common ground and make substantive progress with them. They’ve become less interested in destroying bad ideas than in destroying the people who hold them. More and more, their ways of seeing and interacting with the world – and each other – have become dominated by signaling and tribalism. And the result is that the classic “kitchen table discussions” – the ones where close friends or family members are just bouncing ideas off of each other in a freewheeling kind of way, not taking anyone’s points too personally or getting too emotionally worked up, but just comparing different ideas, trying them on for size, and seeing where they lead – are being supplanted by something more petty-minded, and more damaging. Instead of constructively exchanging ideas around the kitchen table – sometimes disagreeing but still hearing each other out and trying to understand each other – the kinds of conversations dominating our discourse today are more like two people standing on top of soapboxes on opposite sides of the street, raving about how wrong the other one is, and never actually communicating with each other directly at all. Instead of one two-way conversation, we’ve got two one-way conversations. People are more interested in grandstanding and pandering to their own side than trying to engage with the other side; and even when they do debate with the other side, this instinctive inclination toward grandstanding and pandering keeps the two sides from ever being able to meet each other on equal footing. They tend to just stick with the same standard talking points that they always use when talking to their own side, and as a result they simply end up talking past one another. As Tim Minchin puts it:

Most of society’s arguments are kept alive by a failure to acknowledge nuance. We tend to generate false dichotomies, then try to argue one point using two entirely different sets of assumptions, like two tennis players trying to win a match by hitting beautifully executed shots from either end of separate tennis courts.

It can often seem like we don’t even have true conversations at all these days – all we have are competing narratives. (Or, if we’re among allies, all we have are self-congratulatory back-patting sessions through which we reinforce our chosen narratives.) We talk at each other, and we talk past each other, but we don’t talk to each other – and we sure as hell don’t listen.

VII.

Part of the reason why things have recently turned so dramatically in this direction, I think, is that the way we consume and communicate information has itself changed so dramatically. The emergence of new forms of media – specifically cable news, the internet, and social media – has made it easier than ever for people’s opinions to become polarized; and as more and more people have started getting their information from these sources, many of them have become more and more convinced that there’s no longer any room for constructive debate, because they no longer believe that any of the issues they really care about are actually debatable at all. They’ve become convinced that their side is the only side, and that no legitimate alternative could even exist, much less be worth debating. And so the norms of public debate have unsurprisingly suffered as a result.

That isn’t to say that polarization never existed before cable news came along, of course, or that no one was ever insulated from differing opinions before the invention of the internet. In fact, in some ways, the opposite was true. If you were a typical American living before the age of modern media, most of the people you exchanged religious ideas with were probably just the people who went to the same church as you. Most of the political discussion you heard probably just came from sources like your dad ranting at the dinner table, or your local newspaper talking about the need to preserve the values of the local community. If you were a typical small-town factory worker, you might have only encountered a big-city liberal once in a blue moon; if you were a secular-minded urban professional, you might have only spent any extended time with evangelical conservatives at the occasional family reunion; and so on. In most cases, it would have been pretty standard for everyone you discussed politics or religion with to be more or less on the same page as you. And so most of the time you expressed an opinion on one of these topics, you probably wouldn’t have gotten much pushback on it; the people you were talking with probably would have just nodded along in agreement, and that would have been the end of it. Sure, you might have occasionally encountered somebody who really did disagree with you; but these occasions would have been rare enough that you wouldn’t really have taken them all that seriously and could just have blown them off as aberrations. The “kitchen table conversations” I mentioned a moment ago, even if they involved some disagreements, would have felt fairly innocuous and low-stakes. Maybe, on a purely logical level, you’d have understood from reading the newspapers that there were millions of strange people out there whose views really did differ dramatically from your own – “the Communists” or “the religious fundamentalists” or what have you – but without ever really interacting with them on a consistent basis, “the other side” would have felt like more of an abstraction, a distant enemy that you could wage ideological warfare against without ever having to actually come face-to-face with them. For the most part, you’d have been relatively insulated in your ideological bubble.

Once the internet came along, then, you’d think that this dynamic might have improved. All of a sudden, you were taking all these different populations from all these different walks of life, and putting them together in the same space – so that instead of just hearing one perspective, they could now hear every worldview under the sun, from those they already agreed with to those that flatly contradicted everything they’d been brought up to believe. Optimistically, you’d think that the different sides might have embraced the chance to compare their ideas and learn from each other. And it’s true that some people, to their great credit, actually have made the most of the internet’s potential in this way; they’ve seized this opportunity to learn more about opposing views, and are accordingly gaining a more accurate understanding of the world, filling in blind spots that they didn’t even realize they had before, and growing in their ability to engage constructively with those who disagree with them. Unfortunately though, most people’s preferred method of using the internet (and other modern forms of media like cable news) has been to do the exact opposite. Instead of using these outlets to expose themselves to different perspectives, their natural inclination has been to gravitate toward sources of information that seem the most sensible to them in light of what they already believe – i.e. sources that affirm whatever ideas and biases they already hold. This tendency is perfectly understandable, of course; it seems entirely rational that people should naturally gravitate more toward the sources of information that make the most sense to them. But the ultimate effect of this behavior is that it tends to be self-reinforcing: The more someone consumes a particular source of information affirming their already-held beliefs, the more it cements their conviction that those beliefs are obviously true, and the more it convinces them that a source of information can only be considered reliable if it aligns with those beliefs – which naturally causes them to gravitate toward such biased sources even more strongly, which reinforces their beliefs even further, and so on in a perpetual positive feedback loop. The end result is that people get more deeply immersed in their ideological bubbles than ever – with the liberals customizing their information feeds to give them a constant stream of information aligned with the liberal worldview, the conservatives customizing their feeds to give them a constant stream of information aligned with the conservative worldview, the religious people giving themselves a uniformly religious feed, the nonreligious people giving themselves a uniformly nonreligious feed, and so on. Mind you, that’s not to say that they’re just totally insulating themselves and cutting themselves off from the other side – on the contrary, thanks to the constant flow of information they’re now receiving, they’re seeing more of the other side than ever – but they’re making it so they see only the negative parts. They’re consuming more stories and articles than ever before about how their side is fighting the good fight while the other side is maniacally standing in the way at every turn for no reason other than sheer stupidity and/or evil. 
And because their ideological opponents have also gotten so polarized in this way themselves, it makes it so that on the occasions when they do get a chance to directly interact with those opponents (by showing up in the same public social media thread or getting curious enough to venture into the other side’s bubble and check out some discussion threads there or whatever), they’ll typically only encounter the most aggressively irrational and tribalistic of them – since those are the ones who will tend to be the most vocal and the most eager to grandstand to their own base with over-the-top ideological stances and assertions of out-group contempt – providing themselves with still more confirmatory evidence that the other side must just be a bunch of lunatics. (The way Twitter is designed makes it especially conducive to these kinds of jarring collisions between otherwise-segregated subcultures; see Tanner Greer’s insightful post on the subject here.) The more confirmatory evidence they encounter, the more certain they become that they’re really onto something with their own beliefs. They have to be right, because just look at how insane and awful the people who disagree with them are.
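If it helps to see the mechanics, that feedback loop can be sketched as a toy simulation. Every number in the snippet below is an assumption invented purely for illustration (the starting lean, the step size, the time horizons); the only point is to show how belief-weighted source selection, left to run, drifts toward total certainty.

```python
import random

random.seed(1)

def simulate(days, belief=0.6, step=0.01):
    """Each day, pick a confirming source with probability equal to the
    current belief; confirming stories harden the belief, dissenting
    ones soften it."""
    for _ in range(days):
        if random.random() < belief:        # picked a confirming source
            belief = min(1.0, belief + step)
        else:                               # stumbled on the other side
            belief = max(0.0, belief - step)
    return belief

for days in (0, 100, 500, 1000):
    print(days, round(simulate(days), 2))   # belief creeps toward 1.0
```

No single day’s nudge amounts to much, but because the belief controls which sources get picked in the first place, the drift compounds on itself – which is exactly the loop described above.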

And to be clear, these different groups (for the most part) aren’t just brainwashing themselves with phony propaganda and made-up anecdotes. If all the stories they were consuming were totally fabricated, that would make matters a good bit simpler – because it would be possible to falsify the stories, discredit the sources, and move on (at least in theory). But things are more complicated than that. The real problem is confirmation bias – the subconscious tendency we all have to only listen to information that supports our beliefs, while ignoring information that challenges them. As commenter bcglorf explains:

You can get an Alt-Right website that does nothing but post 100% accurate, verified true stories. You can even have them stick to the facts and stay away from any editorialising within their reporting. If they then proceed to exclusively and only report stories about violent crime by non-white or non-Christian minorities, they would have loads of content from across the country to publish every day.

Alexander elaborates with an analogy:

Given the recent discussion of media bias here, I wanted to bring up Alyssa Vance’s “Chinese robber fallacy”, which she describes as:

…where you use a generic problem to attack a specific person or group, even though other groups have the problem just as much (or even more so).

For example, if you don’t like Chinese people, you can find some story of a Chinese person robbing someone, and claim that means there’s a big social problem with Chinese people being robbers.

I originally didn’t find this too interesting. It sounds like the same idea as plain old stereotyping, something we think about often and are carefully warned to avoid.

But after re-reading the post, I think the argument is more complex. There are over a billion Chinese people. If even one in a thousand is a robber, you can provide one million examples of Chinese robbers to appease the doubters. Most people think of stereotyping as “Here’s one example I heard of where the out-group does something bad,” and then you correct it with “But we can’t generalize about an entire group just from one example!” It’s less obvious that you may be able to provide literally one million examples of your false stereotype and still have it be a false stereotype. If you spend twelve hours a day on the task and can describe one crime every ten seconds, you can spend four months doing nothing but providing examples of burglarous Chinese – and still have absolutely no point.

If we’re really concerned about media bias, we need to think about Chinese Robber Fallacy as one of the media’s strongest weapons. There are lots of people – 300 million in America alone. No matter what point the media wants to make, there will be hundreds of salient examples. No matter how low-probability their outcome of interest is, they will never have to stop covering it if they don’t want to.

He continues:

In a country of 300 million people, every single day there is going to be an example of something hideously biased against every single group, and proponents of those groups have formed effective machines to publicize the most outrageous examples in order to “confirm” their claims of bravery. I had an interesting discussion on Rebecca Hamilton’s blog about the Stomp Jesus incident. You probably never heard of this, but in the conservative Christian community it was a huge deal; Google gives 20,500 results for the phrase “stomp Jesus” in quotation marks, including up-to-date coverage from a bunch of big conservative blogs, news outlets, and forums. I guarantee that the readers of those blogs and forums are constantly fed salient examples of conservatives being oppressed and persecuted. And I don’t mean “can’t put up ten commandments in school”, I mean armed gay rights activist breaks into Family Research Council headquarters and starts shooting people for opposing homosexuality. Imagine you hear a story in this genre almost every time you open your RSS feed.

(And now consider all the stories you hear every day about violence and harassment against your people in your RSS feed.)

And if there aren’t enough shooters, someone is saying something despicable on Twitter pretty much every minute. The genre of “we know the world is against us because of five cherry-picked quotes from Twitter” is alive, well, and shaping people’s perceptions. Here’s an atheist blog trawling Twitter for horrible comments blaming atheists for terrorism, and here’s an article on the tweets Brad Pitt’s mother got for writing an editorial supporting Romney (including such gems as “Brad Pitt’s mom wrote an anti-gay pro-Romney editorial. Kill the b––.”)

Then we get into more subtle forms of selection bias. Looking at [various articles around the internet], I am totally willing to believe newspapers are more likely to blaspheme Jesus than Mohammed, and also that newspapers are more likely to call a Muslim criminal a “terrorist” than they would a Christian criminal. Depending on your side, you can focus on one or the other of those statements and use it to prove the broader statement that “the media is biased against Christians/Muslims in favor of Muslims/Christians”. Or you can focus on one part of society in particular being against you – for leftists, the corporations; for rightists, the universities – and if you exaggerate their power and use them as a proxy for society then you can say society is against you. Or as a last resort you can focus on only one side of the divide between social and structural power.

It’s not so much a question of whether false information is being perpetuated (although to be sure, this is often a problem in its own right). The big issue here is that certain information is being selectively presented, in a way that supports one side’s narrative, while opposing information is conveniently overlooked or ignored. Audiences may think they’re getting both sides of a story – but what they’re actually seeing is the best of one side and the worst of the other. As the Chinese Robber Fallacy demonstrates, it’s all too easy for anyone who wants to push a particular narrative to comb through Twitter, pick out three or four inflammatory tweets from the dumbest members of their opposing tribe, and then claim that these isolated examples represent the other side in its entirety and therefore constitute proof of its evil. This can seem very compelling to anyone who wants to believe it, because after all, the tweets they’re reading are real. It simply doesn’t occur to them to consider the 99.9% of the story that they might not be seeing. Manson puts it this way:

When all information is freely available at the click of a mouse, our attention naturally nosedives into the sickest and most grotesque we can find. And the sickest and most grotesque similarly finds its way to the top of the nation’s consciousness, dominating our attention and the news cycle, dividing and recruiting us into its ever more polarized camps.

We become only exposed to the most extreme negative aspects of certain groups of people, giving us a skewed view of how other people in the world really think, act, and live. When we are exposed to police, we only see the worst 0.1% of police. When we are exposed to poor African Americans, we’re only exposed to the worst 0.1% of poor African Americans. When we’re exposed to Muslim immigrants, we only hear about the worst 0.1% of Muslim immigrants. When we are exposed to chauvinist, shitty white men, we’re only exposed to the worst 0.1%, and when we’re exposed to angry and entitled social justice warriors, we’re only exposed to the worst 0.1%.

As a result, it feels as though everyone is an angry fucking extremist and is full of hate and violence and the world is coming undone thread by thread, when the truth is that most of the population occupies a silent middle ground and is actually probably not in so much disagreement with one another.

We demonize each other. We judge groups of people by their weakest and most depraved members. And to protect ourselves from the overreaching judgments of others, we consolidate into our own clans and tribes, we take refuge in our own precious identity politics and we buy more and more into a worldview that is disconnected from cold data and hard facts.
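The arithmetic behind both the Chinese Robber fallacy and Manson’s “worst 0.1%” point is easy to check with a toy simulation. In the sketch below, the 0.1% bad-actor rate is the figure from the passages above, while the population size, the feed size, and the random seed are assumptions made up for the illustration.

```python
import random

random.seed(0)
POPULATION = 1_000_000
BAD_RATE = 0.001  # the "worst 0.1%" from the passages above

people = ["awful" if random.random() < BAD_RATE else "ordinary"
          for _ in range(POPULATION)]

# An unbiased 50-story feed reflects the group as it actually is:
unbiased_feed = random.sample(people, 50)
print(unbiased_feed.count("awful"), "awful stories out of 50")  # usually 0

# An outrage-driven feed draws only from the bad 0.1% -- and with
# roughly a thousand genuine examples to choose from, it never runs
# dry, even though every single story it runs is true.
awful_pool = [p for p in people if p == "awful"]
outrage_feed = random.choices(awful_pool, k=50)
print(outrage_feed.count("awful"), "awful stories out of 50")   # always 50
print(len(awful_pool), "true examples available in total")
```

Scale the same arithmetic up and you get exactly the quoted math: one in a thousand of a billion people is a million perfectly true, perfectly misleading examples.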

Alex Schmidt adds a similar perspective in a podcast discussion with Wong, Katie Goldin, and Jack O’Brien:

One of the scariest things to me […] is I feel like most people don’t realize that there are entire segments of our population that they only understand through “outrage stories” about something. There are a lot of people who I think only have a concept of feminists and feminism through angry stories about feminism, or about cherry-picked and misleading and overblown examples of someone practicing or promoting the idea of feminism. And then there are probably also people who have only come into contact with evangelical Christians that way, only come into contact with Muslims that way, and that’s incredibly dangerous that that’s how a lot of people understand a lot of other people who they live in the same country as.

If these types of stories are all you ever see, then you might easily wind up thinking that half the people in this country are card-carrying fascists, or card-carrying communists – or just that they’re completely unhinged and incapable of acting rationally. But if you do convince yourself of this, then your response will be accordingly disproportionate – making you an unwitting part of the problem yourself.

That’s one of the most ironic parts of all this; as Manson pointed out earlier, a lot of the outrage coming from each side is just people getting angry at how outraged the other side is. One side will get upset about something like holiday decorations (e.g. the left getting upset at culturally insensitive Halloween costumes, or the right getting upset at Starbucks cups that say “Happy Holidays” instead of “Merry Christmas”), and then the other side will get upset about how upset the first side is getting over such a ridiculous issue, and the outrage from each side will just build on itself until everyone is at each other’s throats and they’re all using each other’s hostility as proof of how incapable the other side is of being rational. Even if both sides are originally coming from places of relative reasonableness, the mere perception that there’s so much unreasonableness between them can become a self-fulfilling prophecy by dominating the narrative and hindering their ability to have meaningful conversations about anything else. When everyone is so preoccupied with their “Can you believe what those nuts on the other side said this week?”-type stories, it becomes far more difficult to make any kind of ideological progress, because there’s no room left to talk about the actual ideas.

The podcast conversation continues (somewhat paraphrased for clarity):

GOLDIN: It’s like there’s this culture of outrage that’s based on perceived outrage culture; it’s like an outrage ouroboros. You’ll have the National Review having a list of “the most outrageous times colleges were PC,” but all of the examples are either really overblown, or when you look into them they’re not that bad. So it’s like a snake eating its own tail; it’s one side being outraged at what they think the other side is outraged about, and then it’s just kind of like a self-feeding cycle of anger.

[…]

WONG: Well, in my whole life, I think for every one time I’ve read an actual trigger warning, I’ve read probably 50,000 jokes and snide comments about how trigger warnings are everywhere. Like, it’s a purely fictional thing that in college, “Well there’s just trigger warnings everywhere, you can’t say anything, you can’t do anything.” They’ve just invented this. And it’s the same thing with, say, veganism. I’ve never in my life – I’ve known a lot of vegetarians, I know several vegans – I’ve never ever run into a pushy vegan in my life. But if you get into a crowd of people on the right talking about vegans, they all have this fantasy that, “Well if you’re ever at a party with a vegan, they’ll throw the meat in the trash and they’ll make everyone stop eating and they’ll just have all these demands.” But I’ve never met that person. I’ve never run into that person in my life. But within that bubble, it’s only outrage stories about some vegan protesting something, or a vegan going to a wedding and demanding to have a separate menu or whatever. It’s like, you build a fake, hateful caricature of the other side. But then conversely, it’s very hard to convince people on the left that they’ve built up that same thing for the right. Because the discussions about how to reform healthcare, for example, when you get down to the level of actual experts, the actual people having to write the policy, the actual people with financial stakes in it, the insurance companies, pharmaceutical companies, hospitals, all the players – there are these really hard choices that people are going to have to try to make, and there’s this crisis that the cost of health care is going up at something like five times the rate of inflation, and simultaneously the country is aging, so people need more and more care, and there’s not enough money to go around. If the costs keep going up, we can’t afford to give everybody the health care they need. On one level, there are people trying to figure out some terrible compromise, or what solution has to be made here. On the other level, there’s just this glib, snide attitude toward anyone who wants to change the rules, that’s just, “Well, you just want to see poor people die. You just want to see sick people die. You’re just heartless.” And it’s like, that’s a cartoon character. The idea that there’s an entire political party that doesn’t feel anything for sick people is absurd. But if you’ve carefully only filtered the most heartless comments, if you’ve carefully only filtered the shrillest of pundits – because I don’t doubt, somewhere right now Rush Limbaugh is probably saying, “Who cares if the poor people die, they’re not doing anything anyway” – if you use that as your representation of the debate that’s being had, you are uninformed. You’re never going to become clear on what the actual issue is, which is that costs are out of control, and no one has any idea of how to fix it in a way that will make everyone happy. But it’s so much easier to engage it as white versus black, good versus evil kind of thing: “We are pure and good because we want to give sick people medicine, and they’re pure black and evil because they want to deprive sick people of medicine.” You’re more ignorant than someone who’s illiterate, who can’t even read about the conflict in the first place, because not only are you uninformed, but you’ve created a cartoon version of the conflict in your mind that has nothing to do with reality.

O’BRIEN: Right, you’re uninformed and extremely confident in the idea that you are informed.

GOLDIN: Especially when you assume that regular people who are Republicans or Democrats want to hurt the majority of people. Like, someone can be wrong, but they’re not stupid or evil necessarily. You can still disagree with how to go about doing something, but the assumption that everyone who voted for Trump is just doing it to hurt liberals is not true at all.

Unfortunately, within this kind of ideological environment, it’s not hard to see how two people with mutually incompatible worldviews could each claim to be 100% certain not only that they’re right and their opponent is wrong, but also that their opponent really is stupid and/or evil as well. If all they ever see is an endless stream of stories showcasing the other side’s wickedness and stupidity, then how could they not come to the conclusion that the other side is utterly bankrupt of good ideas?

The truth is, nobody actually considers themselves to be living in an ideological bubble – they just take it for granted that they’re getting all the objective facts, and hey, it’s not their fault that all the objective facts say that their side is great and the other side is terrible. They might not even consider themselves to belong to a “side” or a “group” at all – their “side” is just “people who know what’s really going on in the world and behave like rational adults,” and the “other side” is just “people who don’t.” As far as they’re concerned, their view of reality is the only one that isn’t biased – and if other people can’t see that, well, that’s their problem.

VIII.

It should come as no surprise, then, that this kind of extreme ideological self-segregation creates such a toxic environment for inter-group discourse. When people are only interested in hearing information that reinforces what they already believe, all there’s room for is intra-group discourse. Not only do ideological partisans not understand what the other side really stands for, they don’t even want to understand; they would rather signal to fellow members of their group how virtuous they can be by publicly rejecting the enemy’s lies. As Alexander writes:

Imagine if you actually tried to do a really good job arguing with a Klansman. You read some KKK literature to try to find out where he’s coming from. Then you try to get into his mind, think like him, and maybe try to incrementally convince him that a few of his less tenable points were wrong to begin with.

It might look something like “You know how in 1967, Grand Wizard Jones declared that all minorities were stupid? Doesn’t that conflict with the existence of various minority doctors and lawyers and scientists? Do you think maybe that, even if some minorities are stupid, there might be others who are actually just as smart as white people?”

This wouldn’t signal to your friends that you were going above and beyond in your efforts to argue against the Klansman. It would signal that you were sympathetic, that you could see where he was coming from, that maybe you’re racist yourself.

On the other hand, the dumber and louder and more strident an argument you make, the more it signals how much you hate him and how little you respect him.

You’re probably well-acquainted with this if you spend a lot of time on social media sites or watch a lot of cable news shows. The standard approach nowadays is to spend as little time as possible addressing your opponent’s actual ideas and arguments, and as much time as possible expressing how much contempt you have for them – mocking them for every faux pas, denouncing them for every perceived misdeed (bonus points if you can use the word “sickening” or “disgusting”), and just generally spitting on them at every opportunity. There isn’t any actual conversation involved; it’s just competitive sneering. The more you can put their side down, the more it feels like you’re pushing your side up, as Yudkowsky puts it; by looking down on others, you’re elevating yourself. Balioc refers to this phenomenon as “ritualized performative hate,” a phrasing that I think sums it up pretty well.

It’s not just a signaling thing, though. As Alexander points out, there’s also a certain tactical rationale for being able to dismiss your opponents as “too wrong to even be worth talking to” (similar to what we were talking about before with trying to intentionally lump your opponents together with more radical extremists in order to marginalize them):

Social shaming […] isn’t an argument. It’s a demand for listeners to place someone outside the boundary of people who deserve to be heard; to classify them as so repugnant that arguing with them is only dignifying them. If it works, supporting one side of an argument imposes so much reputational cost that only a few weirdos dare to do it, it sinks outside the Overton Window, and the other side wins by default.

[Examples:]

“I can’t believe it’s 2018 and we’re still letting transphobes on this forum.”

“Just another purple-haired SJW snowflake who thinks all disagreement is oppression.”

“Really, do conservatives have any consistent beliefs other than hating black people and wanting the poor to starve?”

“I see we’ve got a Silicon Valley techbro STEMlord autist here.”

Nobody expects this to convince anyone. That’s why I don’t like the term “ad hominem”, which implies that shamers are idiots who are too stupid to realize that calling someone names doesn’t refute their point. That’s not the problem. People who use this strategy know exactly what they’re doing and are often quite successful. The goal is not to convince their opponents, or even to hurt their opponent’s feelings, but to demonstrate social norms to bystanders. If you condescendingly advise people that ad hominem isn’t logically valid, you’re missing the point.

Sometimes the shaming works on a society-wide level. More often, it’s an attempt to claim a certain space, kind of like the intellectual equivalent of a gang sign. If the Jets can graffiti “FUCK THE SHARKS” on a certain bridge, but the Sharks can’t get away with graffiting “NO ACTUALLY FUCK THE JETS” on the same bridge, then almost by definition that bridge is in the Jets’ territory. This is part of the process that creates polarization and echo chambers. If you see an attempt at social shaming and feel triggered, that’s the second-best result from the perspective of the person who put it up. The best result is that you never went into that space at all. This isn’t just about keeping conservatives out of socialist spaces. It’s also about defining what kind of socialist the socialist space is for, and what kind of ideas good socialists are or aren’t allowed to hold.

I think easily 90% of online discussion is of this form right now, including some long and carefully-written thinkpieces with lots of citations. The point isn’t that it literally uses the word “fuck”, the point is that the active ingredient isn’t persuasiveness, it’s the ability to make some people feel like they’re suffering social costs for their opinion. Even really good arguments that are persuasive can be used this way if someone links them on Facebook with “This is why I keep saying Democrats are dumb” underneath it.

When this strategy is in full swing, as Alexander’s Klansman analogy illustrates, it can feel like a perfectly good substitute for – or even a superior alternative to – ever having to consider opposing ideas on their actual merits. It’s not particularly uncommon to encounter people who refuse to even think about opposing ideas for long enough to come up with legitimate rebuttals against them – because why would they need to, when they can just relentlessly shame and mock their opponents into submission instead? If you’re able to get your opponents’ views banned, or marginalize them to such an extent that no one is willing to entertain them anymore, then there’s no need to actually refute them – or so the thinking goes. You can just use condescending derision in place of argument. The odd result of this tactic, though, is that when you challenge its practitioners on their own ideas, they can’t even mount a reasonable defense of them, because they’ve never considered it necessary. Fredrik deBoer describes occasions when he’s encountered such behavior himself:

[These] people […] won’t attempt to answer [hard] questions [that challenge their view]. My experience suggests instead that they will roll their eyes, dismiss them, and act as though the answers to them are settled and obvious, even though different people within that group so often answer them in flatly contradictory ways. That’s because there’s no there there.

It’s worth noting, by the way, that when you see these kinds of attempts to marginalize opposing views, they don’t always take the form of someone angrily demanding that their enemies’ evil ideas be banned outright, or indignantly huffing and puffing with righteous fury until they’re red in the face. Most ideologues recognize that trying to delegitimize opposing ideas by painting them as evil is an uphill battle; so a lot of times their preferred approach will instead be to try to delegitimize them through methods like sarcasm and ridicule. They won’t try to convince you their opponents are evil; they’ll try to convince you they’re laughably stupid.

Alexander gives a few examples:

A Christian-turned-atheist once described an “apologetics” group at his old church. The pastor would bring in a simplified straw-man version of a common atheist argument, they’d take turns mocking it (“Oh my god, he said that monkeys can give birth to humans! That’s hilarious!”) and then they’d all have a good laugh together. Later, when they met an actual atheist who was trying to explain evolution to them, they wouldn’t sit and evaluate it dispassionately. They’d pattern-match back to the ridiculous argument they heard at church, and instead of listening they’d be thinking “Hahaha, atheists really are that hilariously stupid!”

Of course, it’s not only Christians who do that. I hear atheists repeat the old “I believe the Bible because God said it was true. We know He said it was true because it’s in the Bible. And I believe the Bible because God said it is true” line constantly and grin as if they’ve said something knee-slappingly funny. I’ve never in my entire life heard a Christian use this reasoning. I have heard Christians use the “truth-telling thing” argument sometimes (we should believe the Bible because the Bible is correct about many things that can be proven independently, this vouches for the veracity of the whole book, and therefore we should believe it even when it can’t be independently proven) many times. If you’re familiar enough with the atheist version, and uncharitable enough to Christians, you will pattern-match, miss the subtle difference, and be thinking “Hahaha, Christians really are as hilariously stupid as all my atheist friends say!”

Sometimes even the straw-man argument is unnecessary. All you need to do is get in a group and make the other side’s argument a figure of fun.

There are lots of good arguments against libertarianism. I have collected some of them into a very long document which remains the most popular thing I’ve ever written. But when I hear liberals discuss libertarianism, they very often head in the same direction. They make a silly face and say “Durned guv’mint needs to stay off my land!” And then all the other liberals who are with them laugh uproariously. And then when a real libertarian shows up and makes a real libertarian argument, a liberal will adopt his posture, try to mimic his tone of voice, and say “Durned guv’mint needs to stay off my land! Hahaha!” And all the other liberals will think “Hahaha, libertarians really are that stupid!”

These are only a few examples; nowadays, this kind of thing is everywhere. Part of the reason why it’s become so popular, I suppose, is the fact that satirical TV shows and wisecracking internet pundits have had so much success using it. Who wouldn’t want to emulate the way shows like The Daily Show and South Park so masterfully take down their targets, or the way the most popular Twitter personalities so brilliantly skewer theirs? But the fact that so many people follow this line of thinking means that everywhere you look now, the go-to method for handling people you disagree with is just to laugh at everything they say – as if the idea of someone disagreeing with your ideology is something you just find hilarious for some reason. It doesn’t matter whether what they’re saying is actually funny in any way, of course; if someone actually disagreed with you because they really were significantly less intelligent, you probably wouldn’t find that funny at all. Laughing at someone less intelligent than you and calling them stupid is the kind of thing a bully would do. But the point isn’t whether you actually think they’re stupid; your laughter and mockery is simply intended to portray them that way. Your goal is to signal that your opinions are so overwhelmingly obvious that anyone who disagrees must be an utter buffoon and therefore worthy of ridicule. If you can do that, then you don’t have to go to all the work of actually reviewing the evidence, explaining your justifications, and winning the argument on the merits. You just win by creating the false impression that the issue is already settled.

It’s a bit like how certain animals puff themselves up to make themselves look bigger and deter predators. They may not actually be able to win in a fight, but if they can make themselves look like they would, then that can save them from having to ever do any actual fighting in the first place. This is why you see this kind of tactic used even in contexts where the people using it don’t have any good basis whatsoever for feeling smugly superior – the fact that they don’t have a particularly compelling case is precisely why they need to try and win the argument through mockery and derision instead. If you ever get a chance to poke your head into a flat-earth forum or a conspiracy theorist website or some other ultra-fringe corner of the internet (and I highly recommend doing so just so you can see what I’m talking about for yourself), you’ll find that the commenters there always display just as much casual certainty in their beliefs – and just as much snide incredulity that anyone could be idiotic enough to believe otherwise – as any other opinionated community. The concept of ideological humility – of being open to the possibility that you might be wrong and desiring to change your beliefs if so – is passé in these kinds of echo chambers. Whether we’re talking about fringe conspiracy theories or mainstream religious, political, and scientific debates, the general attitude seems to be that the best way to communicate is through scoffing at the other side. And the people who are best at it – the most obnoxious trolls and the most spiteful pundits – are the ones who are the most celebrated by their own side.

This is another aspect of the whole public mockery mindset that’s worth highlighting. There’s this widespread trend nowadays, where any time some public figure makes a big show of treating those who disagree with them with sarcastic contempt, their fans don’t think any less of them for it – in fact, they act like it’s a great thing and cheer them on. Whether it’s a political provocateur like Ann Coulter or Milo Yiannopoulos, a regular celebrity like Nicki Minaj or Steve Jobs, or even a fictional character like Dr. House from House or Rick from Rick and Morty, the more unrepentantly dickish remarks these people make to those around them – the more straight-up mean they are – the more their fans applaud them for it. If you’ve ever seen someone tweet something like “Oh my God they just don’t give a fuck, hahaha I love it,” you know what I mean. “Not giving a fuck,” “not caring who you upset” – these kinds of things aren’t considered character flaws, or signs that someone is just being an asshole; they’re considered positive features. And the same attitude applies to ideological discussions between regular people as well. It can sometimes be hard to initiate an earnest, respectful discussion with someone you disagree with, because for a lot of people, the only reason to want to debate someone you disagree with in the first place is that you get to enjoy beating and humiliating them. Civil discourse isn’t fun; sticking it to those pinheads on the other side is fun.

We all know what it’s like to get caught up in this kind of mentality. Let’s face it – if you’ve ever gotten into an argument over an issue you felt strongly about (especially online), you know how good it can feel to “own” your opponent and demonstrate your superiority over them. When you see some bozo on the internet arrogantly spouting nonsense and you decide to roll up your sleeves and put them in their place, it doesn’t feel like you’re picking on them or being belligerent (especially considering that your side is the persecuted underdog). When you nail them with that perfect comeback, rendering them unable to even sputter a feeble defense, and leaving onlookers in awe at how masterfully you’ve shut them down, it just feels delicious. (Never mind that this isn’t how these things typically go, of course – usually the other person just fires back a retort of their own and mentally congratulates themselves for owning you – but no matter.) You aren’t really concerned with trying to sit down with your opponent and patiently work through a difficult conversation in order to get down to the fundamental source of your disagreement and perhaps reach a common understanding; you’re just enjoying the satisfaction of taking down an idiot who deserves to be taken down.

Needless to say, though, when you’re so reckless in your disregard for other people that you stop “giving a fuck” whether you’re doing more harm than good, the unsurprising result is that, well, you tend to do more harm than good. Recent history is full of stories of people having their lives ruined by mobs of keyboard vigilantes who saw an opportunity to engage in a little recreational outrage and didn’t really care whether they might be getting something wrong or piling on more than the person deserved. From the perspective of each individual in one of these mobs, it probably didn’t feel like what they were doing was a big deal; they might not have even felt like they were part of a mob at all. They were just one person having a bit of fun on their social media account – just letting off a little steam towards something that annoyed them – so how could they be blamed for the cumulative effect of thousands of other people simultaneously doing the same thing? And in a sense, they’re right – each individual person’s contribution to the overall mob effect tends to be fairly minimal in most of these cases. If anyone who made some misstep in public only had to face the disapproval of one anonymous stranger, it probably wouldn’t be that big a deal. But of course, the nature of group signaling and tribalism ensures that these kinds of things are never limited to just one person; and the way that social media technology is designed to maximize sharing and reposting only amplifies these effects. The process of identifying someone for derision and then heaping scorn onto them is a group activity – they call it “dogpiling” for a reason – and the more people are involved, the more severe the effect is. No one has to feel guilty for their own individual contribution, because there are so many other people in the mob that it wouldn’t make any difference if they declined to participate. But you know how the old saying goes: No individual raindrop ever feels responsible for the flood.

I think this collective perception, this feeling of not having any real, tangible connection with the consequences of your beliefs, plays a big part in explaining the lack of seriousness with which so many people approach these ideological debates. There’s this sense that although the issues being discussed may be big ones, the stakes aren’t actually real for the people discussing them; it’s like they’re just gambling with play money. If you actually took one of these knee-jerk “don’t give a fuck” types and put them in a situation where the individual stakes for them really were especially high, then it might be a different story. If, for example, they woke up one morning to find that they’d been made President of the United States, and suddenly they were in charge of making all the big decisions that affected the lives of millions of people, it’s likely that (in most cases) they’d instantly become a lot less flippant and sloppy with their reasoning, and would adopt a much more cautious, circumspect approach, taking measures to consult with all the most respected experts and trying to figure out what the right answers really were before making any public statements or decisions. Or to take another hypothetical, if you took someone who loved getting into religious arguments and you put them in a situation where they were suddenly faced with a medical diagnosis that they only had one week left to live, odds are they’d instantly become a lot less glib and dismissive toward people whose belief in an afterlife (or lack thereof) differed from their own. But because people so seldom find themselves in situations where their personal stakes really are that high, most of them don’t mind indulging in a little bit of biased reasoning in order to preserve their beliefs and win their arguments. There’s just not enough of a real penalty for being wrong. As Brennan writes in his discussion of voting behavior:

In our day-to-day lives, we tend to get punished for being epistemically irrational. If you think looks are all that matters in a mate, you’ll have a string of bad relationships. A person who indulges the belief that buying penny stocks is key to financial success will lose money. The Christian Scientist who indulges the belief that pneumonia can be cured by prayer might watch their children die. And so on. So reality tends to discipline us into thinking more rationally about these things.

Unfortunately, in politics, our individual political influence is so low that we can afford to indulge biases and irrational political beliefs. It takes time and effort to overcome our biases. Yet most citizens don’t invest the effort to be rational about politics because rationality doesn’t pay.

Suppose, for the sake of argument, that Marxist economic theory is false. Imagine that electing a Marxist candidate would be an absolute disaster – it would destroy the economy, and lead to widespread death and suffering. But now suppose Mark believes Marxism on epistemically irrational grounds – he has no evidence for it, but it caters to his preexisting biases and dispositions. Suppose Mark slightly enjoys being Marxist; he values being Marxist at, say, five dollars. Mark would be willing to overcome his biases and change his mind, but only if being Marxist started to cost him more than five dollars. Now suppose that Mark gets an opportunity to vote for the disastrous Marxist candidate or a decent run-of-the-mill Democrat. While it’s a disaster for Mark if the Marxist wins, it’s not a disaster for him to vote Marxist. Since Mark’s vote counts for so little, the expected negative results of voting for the Marxist are infinitesimal, just as the expected value of voting Democrat is infinitesimal. Mark might as well continue to be and vote Marxist.

The problem, again, is that what goes for Mark goes for us all. Few of us have any incentive to process political information in a rational way.
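Brennan’s point about Mark reduces to one line of expected-value arithmetic. In the sketch below, the five-dollar valuation is Brennan’s own figure; the odds of casting a decisive vote and the dollar cost to Mark of a Marxist victory are hypothetical numbers I’ve chosen purely for illustration.

```python
# Hypothetical inputs for Brennan's "Mark" example; only the $5
# valuation comes from the quoted passage.
p_decisive = 1e-7               # assumed chance Mark's vote decides the election
harm_if_marxist_wins = 100_000  # assumed personal cost to Mark, in dollars
value_of_staying_marxist = 5    # Brennan's figure: the belief is worth $5 to Mark

# Expected personal cost of casting the irrational vote:
expected_cost = p_decisive * harm_if_marxist_wins
print(f"expected cost of voting Marxist: ${expected_cost:.2f}")  # $0.01
print(expected_cost < value_of_staying_marxist)                  # True
```

A penny of expected harm weighed against five dollars of identity value: under assumptions anything like these, reality never hands Mark a bill large enough to make rethinking worthwhile.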

Beck adds:

Sometimes during experimental studies in the lab, [Jennifer] Jerit says, researchers have been able to fight against motivated reasoning by priming people to focus on accuracy in whatever task is at hand, but it’s unclear how to translate that to the real world, where people wear information like team jerseys. Especially because a lot of false political beliefs have to do with issues that don’t really affect people’s day-to-day lives.

“Most people have no reason to have a position on climate change aside from expression of their identity,” Kahan says. “Their personal behavior isn’t going to affect the risk that they face. They don’t matter enough as a voter to determine the outcome on policies or anything like this. These are just badges of membership in these groups, and that’s how most people process the information.”

The part about people wearing ideologies like team jerseys is especially relevant when it comes to the mockery-and-sarcasm crowd – because for a lot of them, the whole thing is really just a kind of game. They aren’t engaging in ideological debates in order to explore nuances, harmonize disparate ideas, and uncover truth – they’re not even particularly concerned with changing their opponents’ minds – they’re just doing it because arguing and shit-talking are fun. Mudslinging is cathartic. The way they handle these issues isn’t actually going to change anything one way or another anyway – it’s not like they’re the ones who are actually sitting in the Oval Office – so why not just cut loose and throw self-restraint to the wind?

In short, then, the problems caused by behaviors like motivated reasoning, tribalism, and signaling can come from two different directions. On the one hand, there are the warriors – people who are deeply passionate about the issues and take them very seriously, but who, as a result of this passion, end up taking themselves far too seriously as well. They adopt a crusader mentality, they refuse to cede an inch of ground to their enemies lest they lose face, and so forth. (These are the soldiers in Galef’s “soldier vs. scout” dichotomy.) On the other hand, there are the gleeful cynics – people who don’t take themselves as seriously, but who don’t take the issues particularly seriously either. To them, everything is just a joke or a game. (If we wanted to expand Galef’s theme to include a third category, we might call them the jesters or something.)

Both of these groups – the warriors and the cynics – are prone to things like condescension and dismissiveness toward their opponents’ views. They’re also prone to making a lot of noise and dominating the vast majority of public discourse. Lost amidst all the noise produced by these factions, though, are the people who take the issues seriously, but who don’t take themselves so seriously that they’re afraid to ever be wrong or lose face. It’s possible to recognize the importance of big ideas and actively engage with them, yet still retain a sense of humility in doing so. But unfortunately, that kind of mindset is all too hard to find these days – and our ability to collectively progress toward truth suffers as a result.

The cynics may be right that one person’s individual behavior doesn’t usually affect very much on its own. But the thing is, attitudes and behaviors never remain confined to one person – they’re contagious. Every social norm, good or bad, starts with one person or group deciding to conduct themselves in a certain way, and spreads from there. And when more and more groups and individuals decide that they’d rather score superficial debate points by dunking on their opponents than turn those opponents into allies and make real progress on their stated goals, then the most tactful way to describe it is “counterproductive.”

IX.

The problem with refusing to constructively engage with people who disagree with you (well, one of the many problems) is that merely sneering at opposing ideas doesn’t actually make them go away. If all you’re doing is deriding and disavowing your opponents’ beliefs, then your opponents will usually just go right on believing them, because you haven’t given them a good argument to convince them otherwise. Sure, it’s understandable that you might not want to give the appearance of legitimizing your opponents’ arguments by taking them seriously. As Noah Berlatsky writes:

Bad ideas aren’t worth debating […] Lots of people believe in [bad political ideas], but lots of people believe in astrology, too. That doesn’t mean that mainstream publications should start running serious op-eds about what the arrangement of the stars says about the major political issues of our day.

But the thing is, there isn’t as much of a pressing need to address erroneous beliefs like astrology because the existence of such beliefs doesn’t have all that much of an impact on the daily workings of our world. If the President of the United States suddenly started basing major decisions like whether to go to war on astrological readings, then yes, it absolutely would be necessary to address those ideas – not just in spite of the fact that they’re so bad, but specifically because they’re so bad. Whether you want to recognize such misguided beliefs as legitimate is largely beside the point here, because what you need to be more concerned with is the fact that other people do regard them as legitimate enough to be worth taking seriously. Simply sticking your fingers in your ears and refusing to dignify them with a response doesn’t change the fact that those ideas still exist – it just means that they’re not being disproven. As Wong writes:

“So we actually have to have a discussion on why there’s no worldwide Jewish conspiracy to control the media?!?! Really?!?”

YES, BECAUSE THE DISCUSSION WILL OCCUR WITH OR WITHOUT YOU, AND IT IS PHYSICALLY IMPOSSIBLE TO STOP IT. ALL THAT IS TO BE DECIDED IS WHETHER OR NOT WE ADD OUR VOICES.

So if you imagine, for instance, some impressionable young person just discovering a particularly bad set of ideas for the first time, and finding some of the points persuasive because they don’t know any better, and then bringing up those points in a public discussion – then what happens when they don’t encounter any kind of serious counterargument or refutation of their points, but instead are only met with sarcastic taunts and insults? Well, it’s possible that they might become embarrassed and decide to reevaluate the validity of their beliefs – but that’s not always the likeliest outcome, especially if their detractors are people from an opposing ideological tribe. What’s more likely to happen is that they come away from the exchange thinking that the people on the other side are a real bunch of assholes who can’t even have a meaningful conversation; and they’ll probably also come away with the impression that since nobody actually made a legitimate attempt to disprove their arguments in a serious way, then such a disproof must not actually exist, and therefore they must be correct in their beliefs. If they only ever hear responses like “You’re too wrong to even be worth talking to,” or “I don’t even have the time to explain how wrong that is” (or worse, if they’re just angrily shouted down), then their natural response will most likely be, “Aha, they’re trying to dismiss me because they don’t actually have a good counterargument; that means the things I’m saying must be correct!” Wong continues:

If [some young person who doesn’t know any better sees] a protest in the park, and they see a white nationalist in a suit and tie speaking calmly into a microphone, saying “Well we of course don’t want to exterminate any minorities, all we want is peace between races, and history has shown that when races are separate, blah blah blah blah blah…” – that person speaking very calmly and very rationally – and then a screeching protestor starts slapping at them and clawing at their face, screaming “NAZI FASCIST NAZI” […] If you’re coming in cold, which of those two looks like the rational, thinking human being?

And Michelle Goldberg adds:

Some might argue that respectfully debating [bad] ideas […] legitimates them. There’s something to this, but refusing to debate carries a price as well – it conveys a message of weakness, a lack of faith in one’s own ideas. Ultimately, the side that’s frantically trying to shore up taboos is the side that’s losing.

Mind you, this doesn’t mean that you have to go out of your way to empower the promoters of bad ideas, or give them more of a platform to spread those ideas by inviting them to speak at your university or whatever. But you do have to actually give them some kind of real response if you want them to change. Trying to mock or bully opposing ideas into submission instead of refuting them only causes the targets of your disdain to entrench themselves more deeply into whichever ideology they’ve adopted, and emboldens them to take a more antagonistic approach to subsequent debates.

Granted, this isn’t always what happens. If you browbeat enough people out of enough debates, then you’ll certainly deter some of them from participating in future exchanges altogether. A lot of times, what happens when somebody gets viciously blasted for their beliefs is not that they’re emboldened to become more assertive, but that the whole experience leaves such a bad taste in their mouth that they decide to just shut down, retreat back into their ideological bubbles, and stop trying to engage with people who disagree with them altogether. (They still maintain the same beliefs they had before, naturally, but now they just keep those beliefs to themselves, so they don’t have to deal with the hassle of being hounded by bullies on the other side.) On a surface level, this outcome might seem like a desirable one if you’re their opponent. From your point of view, a troublesome person whose ideas are evil and wrong has been effectively silenced, setting an example and deterring other would-be wrongdoers. Mission accomplished, right?

The problem, though, is that “would-be wrongdoers” aren’t the only ones influenced by this chilling effect; there’s collateral damage as well. When you make it clear that you aren’t taking any prisoners in your ideological battle – and that you aren’t particularly interested in differentiating degrees of guilt among your targets – it signals to others that you’re inevitably going to catch some undeserving people in your crosshairs and punish them right alongside the deserving ones. And when that happens, what you end up deterring isn’t “evil” or “wrongdoing,” but people’s ability to speak openly about their ideas and beliefs at all, because they fear that you’ll misconstrue them as being malicious and punish them accordingly. This phenomenon has made it harder in recent years for thoughtful people to talk candidly about their ideas, because they’re paralyzed by the thought that if they’re wrong – even if they’ve just made an honest mistake or unwittingly missed some key piece of information – they’ll be shouted down under a barrage of criticism and abuse. But when so many people decide to simply opt out of the public conversation and retreat back into their comfort zones, those among them who really are mistaken in their beliefs never get the opportunity to correct them, because they never put those beliefs out there to be judged by others in the first place. As Stephens writes:

[This brand of hyper-reactive, antagonistic discourse] has made the distance between making an argument and causing offense terrifyingly short. Any argument that can be cast as insensitive or offensive to a given group of people isn’t treated as being merely wrong. Instead it is seen as immoral, and therefore unworthy of discussion or rebuttal.

The result is that the disagreements we need to have – and to have vigorously – are banished from the public square before they’re settled. People who might otherwise join a conversation to see where it might lead them choose instead to shrink from it, lest they say the “wrong” thing and be accused of some kind of political -ism or -phobia. For fear of causing offense, they forego the opportunity to be persuaded.

Take the arguments over same-sex marriage, which [they] are now debating in Australia. My own views in favor of same-sex marriage are well known, and I hope the Yes’s win by a convincing margin.

But if I had to guess, I suspect the No’s will exceed whatever they are currently polling. That’s because the case for same-sex marriage is too often advanced not by reason, but merely by branding every opponent of it as a “bigot” – just because they are sticking to an opinion that was shared across the entire political spectrum only a few years ago. Few people like outing themselves as someone’s idea of a bigot, so they keep their opinions to themselves even when speaking to pollsters. That’s just what happened last year in the Brexit vote and the U.S. presidential election, and look where we are now.

If you want to make a winning argument for same-sex marriage, particularly against conservative opponents, make it on a conservative foundation: As a matter of individual freedom, and as an avenue toward moral responsibility and social respectability. The No’s will have a hard time arguing with that. But if you call them morons and Neanderthals, all you’ll get in return is their middle finger or their clenched fist.

When you habitually try to bully people out of ideological conversations, it produces a kind of malignant selection process. A certain number of your targets will, indeed, decide that it’s not worth the headache and stop trying to engage with you. Those that remain undeterred, though, will by definition be the ones who are the most committed diehards – the brash, argumentative partisans who don’t give a fuck if you criticize them because they know they’re right and you’re the one who’s an idiot. Once you’ve chased off all the more moderate people whose fervor is only lukewarm, it creates the conditions for the extremists to thrive, because they’re the only ones left. And as these extremists spew their venom and chase off the more moderate members of your side in turn, it causes the effect to build on itself and escalate. All the reasonable people get fed up and leave, until the only ones left to control each side’s narrative are the unreasonable ones. The more they dominate the narrative, the more they marginalize everyone else who isn’t as fanatical as they are, treating anyone who deviates even the slightest bit from their ideology as a depraved heretic. And they’ll even turn their hostility against members of their own side without a moment’s hesitation, sniffing out and “exposing” ostensible allies simply for disagreeing with them on one point or for making one mistake. There is no margin for error or deviation – anything less than their extreme definition of perfection means that you might as well be one of their enemies. Jay Smooth talks about this absolutist mindset in his discussion of race:

In most […] situations, when the possibility arises that we made a mistake, we are usually able to take a few deep breaths and tell ourselves: “I’m only human, everyone makes mistakes.” But when it comes to conversations involving race and prejudice, for some reason, we tend to make the opposite assumption. We deal with race and prejudice with this all-or-nothing, good person/bad person binary, in which either you are racist or you are not racist. If you’re not batting a thousand, then you’re striking out every time. And this puts us in a situation where we’re striving to meet an impossible standard. [Anything less] than perfection means that you are a racist. That means any suggestion that you’ve made a mistake or you’ve been less than perfect is a suggestion that you’re a bad person, so we become averse to any suggestion that we should consider our thoughts and actions. And it makes it harder for us to work on our imperfections. When you believe that you must be perfect in order to be good, it makes you averse to recognizing your own inevitable imperfections, and that lets them stagnate and grow. So the belief you must be perfect in order to be good is an obstacle to being as good as you can be.

Of course, this mentality isn’t just limited to race issues; it permeates everything from politics to religion. Religion, in particular, has an especially nasty history when it comes to this kind of thing; the whole reason terms like “holier than thou” and “witch hunt” became household phrases in the first place was that fundamentalist religious campaigns were waged not against disbelievers, but against perfectly devout believers who simply weren’t fanatical enough in their belief to meet some extreme standard of piety. Even today, innocent religious people around the world are being killed in large numbers by members of their own faith, because those violent fundamentalists judge anyone less fundamentalist than themselves to be heretical degenerates deserving of death.

The ideological “purity tests” conducted here in the West don’t tend to be nearly as extreme, obviously, but the gradual ratcheting up of what standards a person must meet in order to be considered “clean” takes a very similar form. Here’s Wong again:

I’ve never been around an activist group that didn’t turn into an endless series of petty purity tests. I was raised in a church where everyone was looking for more and more inconsequential things to judge each other by. R-rated movies were of course forbidden, but which prime-time network TV shows were permissible? Any of them? Of course rock music was of the devil, but what about country? Aren’t those songs about faith, kind of?

The natural evolution is toward tighter and tighter criteria for what behavior gets you shunned from the group. The end result is that the central cause, the group’s [reason for existing], can be as pure as the driven snow, and yet the tone will get more and more toxic over time, the members becoming less and less charitable with each other. Here, for example, is what my Twitter timeline looks like:

“Nazis are bad and must be opposed.”

Agree!

“People who enable or defend Nazis must also be opposed.”

Makes sense!

“Unlawful violence is perfectly acceptable when opposing Nazis and their enablers.”

Wait, I’m not sure I’m on board with that …

“Anyone who opposes the use of unlawful violence against Nazis is also a Nazi enabler.”

What? No! I’m one of the good guys!

“Also, if you think about it, all American institutions and capitalism itself help support white supremacy, therefore all are Nazi enablers and eligible for violent retribution.”

Hey, I think you just declared war on literally everyone who isn’t currently in the room with you.

You hear experts talk about how extremists get “radicalized” — how a guy went from a mild-mannered food inspector in San Bernardino to a brainwashed suicide attacker in the course of a year or so. But it really isn’t a mystery, and we all form less-murderous versions of this. All it takes is a closed like-minded social circle in which it’s considered unacceptable to disagree with the group, and then devote that group to hating something. It doesn’t even matter if the thing truly deserves hating — it still turns toxic. In fact, it works better if it does. “How can you criticize any flaw in our group’s behavior when the other side is Nazis! That’s literally saying that both sides are the same! The mere existence of pure evil on the other side mathematically means our side is pure good!”

At that point, no criticism is possible and there is nothing to moderate the rage. The rhetoric ratchets higher and higher as each member tries to top each other (to prove their own righteousness by demonstrating they hate the target most), and there is no method for reining it in. Moderate voices from outside the group are excluded completely, anyone from the inside who takes a moderate tone can be shouted down with accusations of being an enemy sympathizer. Soon, everything from objectively grotesque insults to elaborate torture fantasies is tossed around without a second thought.

When things have escalated to such an intense degree, the more vocal and aggressive ideologues will sometimes even target neutral bystanders, criticizing them specifically because they’re neutral. Just going about your business and trying not to get involved in contentious debates, in their eyes, means that you must be complicit in wanting to maintain the status quo. The fact that you aren’t speaking out against what they perceive as heresy or injustice must mean that you tacitly support it and want it to continue. In this way, then, it’s possible for you to be condemned and labeled an enemy not even because of anything you’ve done, but because of what you haven’t done. The mere absence of an opinion or belief can be a mortal crime, even if all you’re doing is sitting at home tending your garden or whatever. This mentality is grimly reminiscent of a quotation from General Ibérico Saint-Jean, a repressive Argentine governor from the 1970s, who said:

First we will kill all the subversives; then we will kill their collaborators; then . . . their sympathizers, then . . . those who remain indifferent; and finally we will kill the timid.

As Baumeister mentioned in his comments on terrorism earlier, the short version of this idea is “Anyone who’s not with us is against us.” (You might also remember George W. Bush saying this at the outset of his war against terrorism, ironically enough.) And although it’s true that sometimes a person remaining silent on a certain topic can in fact be a good indication of where they stand on it – and although it’s true that sometimes remaining neutral in an asymmetrical conflict does in fact favor the more powerful side by default (more on both of these points later) – that doesn’t mean that everyone who keeps themselves uninvolved in a particular conflict is just as bad as those who actively fight on the side of the enemy. And treating them as if they are tends to become a self-fulfilling prophecy. When you sort everyone into black-and-white categories of friend or foe and insist that only those whose ideological fervor precisely matches your own can be considered a friend – and that everyone else is considered a foe and must be punished equally harshly without regard for how bad their transgressions are – you shouldn’t be surprised when the category of enemies you’ve created turns against you. Alexander recounts an illuminating story from ancient China:

Chen Sheng was an officer serving the Qin Dynasty, famous for their draconian punishments. He was supposed to lead his army to a rendezvous point, but he got delayed by heavy rains and it became clear he was going to arrive late. The way I always hear the story told is this:

Chen turns to his friend Wu Guang and asks “What’s the penalty for being late?”

“Death,” says Wu.

“And what’s the penalty for rebellion?”

“Death,” says Wu.

“Well then…” says Chen Sheng.

And thus began the famous Dazexiang Uprising, which caused thousands of deaths and helped usher in a period of instability and chaos that resulted in the fall of the Qin Dynasty three years later.

The moral of the story is that if you are maximally mean to innocent people, then eventually bad things will happen to you. First, because you have no room to punish people any more for actually hurting you. Second, because people will figure if they’re doomed anyway, they can at least get the consolation of feeling like they’re doing you some damage on their way down.

This is why antagonistic discourse can be so self-defeating. Once you’ve made it clear that you’re willing to abandon honesty and reason in the pursuit of your goals, it galvanizes popular opinion against you and undermines your cause by driving away would-be allies. The more you indulge in gratuitously combative or condescending behavior toward your opponents, the more you confirm (both in their minds and in the minds of neutral onlookers) that their side must be the persecuted underdog for having to endure such ill treatment from people like you – and the easier you make it for them to recruit opposition against you by portraying your side as vindictive and unreasonable.

X.

It’s worth noting, by the way, that this is an especially major hazard when it comes to highly visible public spectacles like the mass street protests mentioned earlier. The mass movements that give rise to such demonstrations – Occupy Wall Street, the Tea Party movement, Black Lives Matter – often arise from legitimate grievances based on genuinely upsetting inciting incidents (the housing crash, the Wall Street bailout, the killing of innocent black people by police, etc.) – and accordingly, they often hold quite a bit of legitimacy in the public eye when they first start off. But as the issues in question gain more and more national attention, and as public street protests over them start to appear and grow in number, what inevitably tends to happen is that some of the more pugnacious-minded members of these protests get carried away and start looking for fights, screaming hateful slogans, and destroying property – and when that happens, such outbursts can turn the demonstrations from symbolic gestures of discontent into counterproductive acts of self-sabotage. At that point, it becomes clear that the main function these now-escalated demonstrations are serving is no longer to actually help their ostensible cause in any constructive way (as might be possible with more tactfully self-controlled protests), but simply to allow their participants to get out some of their aggression in a public setting. And as David Frum writes, this is why such demonstrations so often end up doing more harm than good for the very things they’re trying to promote:

[These kinds of overly combative] demonstrations are exercises in catharsis, the release of emotions. Their operating principle is self-expression, not persuasion. They lack the means, and often the desire, to police their radical fringes, with the result that it’s the most obnoxious and even violent behavior that produces the most widely shared and memorable images of the event. They seldom are aimed at any achievable goal; they rarely leave behind any enduring program of action or any organization to execute that program. Again and again, their most lasting effect has been to polarize opinion against them—and to empower the targets of their outrage.

Research on the subject bears this out. According to one study titled “Extreme Protest Tactics Reduce Popular Support For Social Movements”:

We find across three experiments that extreme protest tactics [for example the use of inflammatory rhetoric, blocking traffic, damaging property, and disrupting other citizens’ everyday activities] decreased popular support for a given cause because they reduced feelings of identification with the movement. Though this effect obtained in tests of popular responses to extreme tactics used by animal rights, Black Lives Matter, and anti-Trump protests […] we found that self-identified political activists were willing to use extreme tactics because they believed them to be effective for recruiting popular support.

Contrary to the protestors’ perception that their extreme actions help their cause, though, the popular perception of what they’re doing tends to be more akin to the kind of thing Nathan J. Robinson describes here:

Here, a crowd gathers around a man wearing a MAGA hat, with a masked Antifa member closing in on him and shouting “Fuck you! Fuck you! Fuck you!” This is, first of all, inarticulate and stupid. (“Fuck you!” is a statement empty of any actual leftist content, it’s just a grunt.) But I also don’t see any principled, strategic, and disciplined anti-fascist action here. I just see aggressive, macho white guys being aggressive, macho white guys. And it seems to me as if they’re enjoying themselves just a little too much. I certainly don’t see either any “moral high ground” being claimed or any actual useful combating of Nazi ideology. (I do, however, see grist for new columns in Breitbart and the National Review.)

Even protests that are unusually self-restrained and nonviolent often can’t help but have their share of bad optics like this. Recall the Chinese Robber Fallacy – even if 99.9% of protestors are on their best behavior, that still means that in a crowd of 1000 people there will be, on average, one overly belligerent jackass who makes a scene and ends up getting all the media attention. Whether it’s something as flagrant as outright violence or something as simple as leaving behind a ton of trash after they’ve finished their peaceful protest, these kinds of demonstrations often seem to provide more material for discrediting the movement than for bolstering it. And once the protests start drawing the attention of the media, the national conversation begins to shift from talking about the actual issues at hand to talking about whether the protestors are “going too far” in their tactics. Public figures who otherwise might have felt the pressure of having to defend unpopular ideas against widespread backlash now have an easy way to sidestep that pressure by deflecting the focus of the conversation onto the conduct of the protestors instead. Rather than having a national debate on Wall Street corruption, or government overreach, or police brutality, or systemic racism, the national debate instead ends up being about whether the hooligans marching in the streets and smashing up storefronts are really the kind of people who should be dictating complex public policies or not. And unfortunately for the protestors – especially the ones who actually are trying to accomplish worthy goals through peaceful means – the answer typically turns out to be no.
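It’s worth making the arithmetic explicit here, since “there will be one” really means “one in expectation.” A minimal sketch, under the simplifying assumption that each of the 1,000 protestors independently has a 0.1% chance of acting out:

```python
# Simplifying assumptions: 0.1% misbehavior rate, independent across a crowd of 1000.
p_bad, crowd = 0.001, 1000

expected_troublemakers = p_bad * crowd     # 1.0 troublemaker in expectation
p_at_least_one = 1 - (1 - p_bad) ** crowd  # probability of at least one: ~0.63

print(f"Expected troublemakers: {expected_troublemakers:.1f}")  # 1.0
print(f"P(at least one):        {p_at_least_one:.0%}")          # ~63%
```

So even under these charitable assumptions, roughly two out of every three such crowds will contain at least one camera-ready jackass – and that jackass is exactly who the cameras will find.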

It doesn’t help these peaceful activists’ cause when so many of their comrades-in-arms genuinely do think that violence is a legitimate means for achieving their desired ends. It’s all too common to hear people try to rationalize violence against their ideological opponents by claiming that their enemies, simply by existing, are committing (or at least promoting) violence against them, and that therefore they’re justified in fighting back in self-defense. The fact that, say, racists exist means that they pose a threat to whichever races they’re prejudiced against, so there’s nothing immoral about assaulting them wherever you see them. The fact that people exist who approve of taxation means that they’re calling for the forcible seizing of people’s property, so there’s nothing immoral about using force to stop them. The fact that people exist who are in favor of abortion rights means that they pose a threat to the lives of millions of unborn babies, so there’s nothing immoral about bombing Planned Parenthood centers. The fact that nonbelievers exist who resist the true faith means that they pose a threat to the eternal souls of billions of people, so there’s nothing immoral about using force to silence them before they can lead others to Hell. And so on. Usually, these rationalizations for using violence aren’t quite so extreme; more often than not, people just contrive them as an excuse for the sporadic occasions when someone on their side attacks someone on the other side (as in the recent example of a leftist activist punching a white nationalist in the face at a public demonstration). If you actually take them at their word, though, and follow their reasoning to its logical conclusion, it leads to a very dark place, as Robinson points out:

Here is what I am worried about: I believe that unless the question of violence is treated carefully and responsibly, it could lead to something very bad indeed for the left. For example, say more people on the left come around to [this kind of] reasoning, and believe that fascists should not be permitted to speak publicly. And say they also blur the distinction between neo-Nazis and everyday Trump supporters, who are all lumped under the catch-all category “fascists.” And since fascism is horrific, and the Antifa principle is that it must be stopped “by any means necessary,” there is very little check on the permissible uses of violence. My fear is that, sooner or later, some blonde teenage girl wearing a MAGA hat, or some disabled veteran in a Trump shirt, is going to end up getting put in a coma. And when that happens, the left will face an almighty hellstorm of right-wing rage. I want to know why people are so confident that their endorsement of violent methods wouldn’t lead to this. But all I hear are the same lines, over and over: You have to “nip Nazis in the bud,” “fascism doesn’t go away when it’s asked politely,” etc.

[…]

When you say “[fascism] had to be stopped by force,” [in the historical context of WWII] this involved killing people; do you believe that in the contemporary United States, killing people for holding white supremacist beliefs is acceptable?

[…]

[When you talk about violent tactics against “fascists” being self-defense,] what kind of self-defense are we talking about? What are the acts that are being defended, and in what circumstances? Never trust anyone who speaks in abstractions and refuses to say exactly what it is they are justifying and refuses to answer the most important questions.

Resorting to violence does have a certain gut appeal for many people; if you believe your side is the persecuted underdog, there is a certain heroic romance in the idea of rising up against your oppressors and fighting back, not just in a figurative sense but in a literal one. Having said that, though, if you’re actually right about being the underdog, there’s a problem. As Gene Sharp (who literally wrote the book on resisting oppressive regimes) puts it:

By placing confidence in violent means, one has chosen the very type of struggle with which the oppressors nearly always have superiority.

Alexander elaborates, adding that this point can apply to other forms of dubious debate tactics as well:

Logical debate has one advantage over narrative, rhetoric, and violence: it’s an asymmetric weapon. That is, it’s a weapon which is stronger in the hands of the good guys than in the hands of the bad guys. In ideal conditions (which may or may not ever happen in real life) – the kind of conditions where everyone is charitable and intelligent and wise – the good guys will be able to present stronger evidence, cite more experts, and invoke more compelling moral principles. The whole point of logic is that, when done right, it can only prove things that are true.

Violence is a symmetric weapon; the bad guys’ punches hit just as hard as the good guys’ do. It’s true that hopefully the good guys will be more popular than the bad guys, and so able to gather more soldiers. But this doesn’t mean violence itself is asymmetric – the good guys will only be more popular than the bad guys insofar as their ideas have previously spread through some means other than violence. Right now antifascists outnumber fascists and so could probably beat them in a fight, but antifascists didn’t come to outnumber fascists by winning some kind of primordial fistfight between the two sides. They came to outnumber fascists because people rejected fascism on the merits. These merits might not have been “logical” in the sense of Aristotle dispassionately proving lemmas at a chalkboard, but “fascists kill people, killing people is wrong, therefore fascism is wrong” is a sort of folk logical conclusion which is both correct and compelling. Even “a fascist killed my brother, so fuck them” is a placeholder for a powerful philosophical argument making a probabilistic generalization from indexical evidence to global utility. So insofar as violence is asymmetric, it’s because it parasitizes on logic which allows the good guys to be more convincing and so field a bigger army. Violence itself doesn’t enhance that asymmetry; if anything, it decreases it by giving an advantage to whoever is more ruthless and power-hungry.

The same is true of [propagandistic tools like emotional narratives and] documentaries. As I said before, [an anti-Trump activist] can produce as many anti-Trump documentaries as he wants, but Trump can fund documentaries of his own. He has the best documentaries. Nobody has ever seen documentaries like this. They’ll be absolutely huge.

And the same is true of rhetoric. Martin Luther King was able to make persuasive emotional appeals for good things. But Hitler was able to make persuasive emotional appeals for bad things. I’ve previously argued that Mohammed counts as the most successful persuader of all time. These three people pushed three very different ideologies, and rhetoric worked for them all. [Some act] as if “use rhetoric and emotional appeals” is a novel idea for Democrats, but it seems to me like they were doing little else throughout the election (pieces attacking Trump’s character, pieces talking about how inspirational Hillary was, pieces appealing to various American principles like equality, et cetera). It’s just that they did a bad job, and Trump did a better one. The real takeaway here is “do rhetoric better than the other guy”. But “succeed” is not a primitive action.

Unless you use asymmetric weapons, the best you can hope for is to win by coincidence.

That is, there’s no reason to think that good guys are consistently better at rhetoric than bad guys. Some days the Left will have an Obama and win the rhetoric war. Other days the Right will have a Reagan and they’ll win the rhetoric war. Overall you should average out to a 50% success rate. When you win, it’ll be because you got lucky.

And there’s no reason to think that good guys are consistently better at documentaries than bad guys. Some days the NIH will spin a compelling narrative and people will smoke less. Other days the tobacco companies will spin a compelling narrative and people will smoke more. Overall smoking will stay the same. And again, if you win, it’s because you lucked out into having better videographers or something.

I’m not against winning by coincidence. […] But you shouldn’t confuse it with a long-term solution.

If you really think that the conflict you’re involved in is an asymmetric one – that your side is the oppressed underdog fighting back against a powerful enemy – then the tactics you choose should be asymmetric in the other direction, so as to tip the advantage back in your favor. (And likewise, if you think you already hold the advantage, you should prefer those same asymmetric tactics for the same reason, in order to increase your advantage even further.) But violence is the most symmetric tactic in this regard – and accordingly, it’s almost always the one that will be the most counterproductive to whichever cause you’re fighting for. That’s why, if you truly care about your cause, it should alarm you when people on your side choose to engage in direct violence; and it should also alarm you when they choose to engage in all the other, more figurative ways of “fighting dirty” that we’ve talked about up to this point – demonizing the other side, refusing to take them seriously, denying them any kind of intellectual charity, and so forth – because these kinds of behaviors are exactly the kinds of things that tend to cause inter-group conflicts to escalate and turn ugly, creating the possibility for hatred and violence in the first place.

XI.

Not every disagreement is a life-or-death fight for survival – but by framing every conflict as one, it’s more likely to become so. When both sides have convinced themselves that they’re heroes fighting a high-stakes battle against the forces of evil, then when the two sides collide, it’s not just a matter of dispassionately comparing facts and figuring out which side is correct – it’s more like a holy crusade. As Will Storr writes:

Facts do not exist in isolation. They are like single pixels in a person’s generated reality. Each fact is connected to other facts and those facts to networks of other facts still. When they are all knitted together, they take the form of an emotional and dramatic plot at the centre of which lives the individual. When a climate scientist argues with a denier, it is not a matter of data versus data, it is hero narrative versus hero narrative, David versus David, tjukurpa versus tjukurpa. It is a clash of worlds.

[This phenomenon] exposes this strange urge that so many humans have, to force their views aggressively on others. We must make them see things as we do. They must agree, we will make them agree. There is no word for it, as far as I know. ‘Evangelism’ doesn’t do it: it fails to acknowledge its essential violence. We are neural imperialists, seeking to colonise the worlds of others, installing our own private culture of beliefs into their minds. I wonder if this response is triggered when we pick up the infuriating sense that an opponent believes that they are the hero, and not us. The provocation! The personal outrage! The underlying dread, the disturbance in reality. The restless urge to prove that their world, and not ours, is the illusion.

[We have an] inherent desire to slay Goliath, to colonise the mental worlds of others, to win.

This impulse is powerful enough when it’s just two people contending against each other; but when you’ve got two entire groups going head to head, the effect is amplified enormously as group loyalty incentives take over. Baumeister discusses how good intentions, when taken to an extreme, can often lead to evil consequences within this group dynamic:

One far-reaching difference between idealistic evil [i.e. evil rooted in ideological motives] and other forms of evil is that idealistic evil is nearly always fostered by groups, as opposed to individuals. When someone kills for the sake of promoting a higher good, he may find support and encouragement if he is acting as part of a group of people who share that belief. If he acts as a lone individual, the same act is likely to brand him as a dangerous nut.

One reason for the importance of groups in idealistic evil is the power of the group to support its high, broad ideals. Abstract conceptions of how things ought to be gain social reality from the mere fact of being shared by a group. Without that group context, they are merely the whims of individuals, and as such they do not justify the use of violent means. To put this more bluntly: It is apparently necessary to have someone else tell you that violent means are justified by high ends. If no one of importance agrees with you, you will probably stop short of resorting to them. But if you belong to a group that shares your passionate convictions and concurs in the belief that force is necessary, you will be much more likely to resort to force. People seem to need others to validate their beliefs and opinions before they put them into practice, especially in a violent and confrontational way.

This is one of the less recognized aspects of the much-discussed experiments done by Stanley Milgram. In those studies, an experimenter instructed an ordinary person (a volunteer) to deliver strong electric shocks to another person, who was actually a confederate posing as an unsuspecting fellow subject. These ordinary people complied with instructions and delivered many severe shocks to the victim, far beyond the predictions and expectations of any of the researchers involved in the project.

As Milgram noted, many of the participants were upset about what they were doing. They showed signs of stress and inner conflict while they were pressing buttons that (supposedly) gave painful and even potentially harmful or lethal shocks to another person. […] Such distress is the normal reaction to hurting others.

Despite their inner distress, however, the vast majority of participants delivered increasingly severe shocks, up to the maximum level possible. A crucial factor was the presence of a fellow human being assuring them that their actions were justified and, indeed, were their duty. They had nothing to gain by inflicting harm, nor did they get any prestige or other advantage from hurting the victim, but their actions did presumably serve the commendable goal of advancing scientific progress. The presence of the experimenter to represent the community of scientific researchers was a central aspect of this experiment. By pressing the button, the subject participated in the group’s worthy enterprise.

The importance of the interpersonal dimension was indicated by the effect of physical distance. In later replications of the study, Milgram varied how close the subject sat to the experimenter as opposed to the victim. Being closer to the victim made the subject less willing to deliver hurtful shocks. Being closer to the experimenter (the authority figure) made subjects more willing.

In most cases, of course, such extreme acts are committed by devoted members of the group, rather than by temporary recruits. Thus, they share the group’s beliefs and ideals and are presumably willing to do what will further the positive goals of the group. The group is an important source of moral authority. Individual acts may be questioned, which usually means questioning them in terms of how well they fit into the recognized goals and procedures of the group. But the group itself is above question.

This pattern of deferring to the group’s moral authority is seen over and over again in violent groups. Consider again the Khmer Rouge. Like many Communist parties, it was a firm believer in the practice of self-criticism by individual members. But this meant examining one’s own acts (and thoughts or feelings) to see whether they corresponded to the proper party line. Criticism of the party itself was strictly off-limits.

Criticism sessions in Western Communist groups showed the same pattern. Individuals sat around and scrutinized themselves to see how they fit or failed to fit the official party line, but they never questioned the party line. When the party adopted a new position, individual members scrambled to agree with it and to convince themselves that they had believed this all along. Arthur Koestler cynically described the process from his days as a Communist: “We groped painfully in our minds not only to find justifications for the line laid down, but also to find traces of former thoughts which would prove to ourselves that we had always held the required opinion. In this operation we mostly succeeded.” Whether one looks at religious warriors, members of Fascist or Communist groups, or modern members of street gangs, one finds the same pattern: The group is regarded as above reproach. The members of the group may sometimes think rather poorly of one another, but the group as a whole is seen as supremely good.

Why do groups seem to have this effect? Although several factors contribute, it is necessary to begin with the fundamental appeal of groups. Probably this appeal is deeply rooted in human nature. The human tendency to seek a few close social bonds to other people is universal, and nearly everyone belongs to some sort of group, whether a family or a mass movement. People who lack close social ties are generally unhappy, unhealthy, and more vulnerable than other people to stress and other problems. Some theorists have argued that the tendency to form small groups is the most important adaptation in human evolution, ranking even above intelligence, and so natural selection has shaped human nature to need to belong to groups.

The need to belong may be universal, but it is not always equally strong. One factor that seems especially to intensify the need is competition with other groups. Thus, one could debate the evolutionary benefits of belonging to a group, noting that the advantages of sharing others’ resources could be offset by the pressure to share one’s own resources with them. There is no doubt, however, about the competitive disadvantage of not belonging to a group when there are other groups. If there is some scarce resource such as food that a group wants and a lone individual also wants, the group is almost sure to get it. Thus, the need to bond with other people may be stimulated by the presence of a rival or enemy group.

This tendency toward intergroup competition fits well with what we have already seen. The words Devil and Satan are derived from words meaning “adversary” and “opponent,” which fits the view that rivalry or antagonism is central to the basic, original understanding of evil. Evil is located in the group that opposes one’s own group. The survival of one’s own group is seen as the ultimate good, and it may require violent acts against the enemy group.

[…]

The tendency toward intergroup competition sheds light on one aspect of what some researchers have called the discontinuity effect, that is, the pattern by which a group tends to be more extreme than the sum of its individual members. In particular, higher levels of aggression and violence are associated with group encounters than with individual encounters. People generally expect that a meeting between two individuals will be amiable, and that even if they have different goals or backgrounds they may find some way to compromise and agree. In contrast, people expect that a meeting between two groups will be less amiable and less likely to proceed smoothly to compromise. Laboratory studies support these expectations and indicate that groups tend to be more antagonistic, competitive, and mutually exploitive than individuals. In fact, the crucial factor seems to be the perception that the other side is a group. An individual will adopt a more antagonistic stance when dealing with a group than when dealing with another individual.

Probably the easiest way to understand this difference is to try a simple thought experiment. Imagine a white man and a black man encountering each other across a table in a meeting room, one on one, to discuss some area of disagreement. Despite the racial antagonism that is widely recognized in the United States today, the meeting is likely to proceed in a reasonably friendly fashion, with both men looking for some way to resolve the dispute. Now imagine a group of four white men meeting a group of four black men in the same room. Intuition confirms the research findings: The group dispute will be harder to resolve.

There is nothing sinister or wrong with wanting to belong to a group, of course. Groups may perpetrate evil, but they can also accomplish considerable good (and without doing any harm in the process). Groups can accomplish positive, virtuous things that go beyond what individuals can do. Groups do provide a moral authority, however, that can give individuals sufficient justification to perform wicked actions. Moreover, when groups confront each other, it is common for the confrontation to degenerate into an antagonistic and potentially hostile encounter. In these ways, the existence of a group can promote evil and violence.

When you’re part of a group, there’s a powerful incentive to signal your commitment to the cause by promoting the group’s ideology more aggressively than everyone else around you. Unfortunately, as Wong mentioned before, everyone else around you shares the same incentives – so the more aggressively each member of the group pushes, the more aggressively the other members of the group must push as well if they want to keep up. The result is that movements which start off promoting relatively reasonable goals can quickly spiral out of control and end up promoting dangerously radical ones. Adam Gopnik writes:

Reformers are famously prey to the fanaticism of reform. A sense of indignation and a good cause lead first to moral urgency, and then soon afterward to repetition, whereby the reformers become captive to their own rhetoric, usually at a cost to their cause. Crusaders against widespread alcoholism (as acute a problem in 1910 as the opioid epidemic is today) advanced to the folly of Prohibition, which created a set of organized-crime institutions whose effects have scarcely yet passed. Progressive Era trade unionists, fending off corporate thugs, could steer into thuggish forms of Stalinism. Those with the moral courage to protest the Vietnam War sometimes became blinded to the reality of the North Vietnamese government – and on and on. It seems fair to say that a readiness to amend and reconsider the case being made is exactly what separates a genuine reforming instinct from a merely self-righteous one.

Social scientists actually have a term for this kind of phenomenon, as Brennan notes:

[Group] deliberation tends to move people toward more extreme versions of their ideologies rather than toward more moderate versions. Legal theorist Cass Sunstein calls this the “Law of Group Polarization.”

And the perverse consequence of this law, Brennan adds, is that being part of a group “often causes [individual members] to choose positions inconsistent with their own views – positions that [they] ‘later regret.’”

This is a particularly important point. Here’s Baumeister again:

A final way in which groups contribute to the escalation of violence emerges from the discrepancy between what the members of the group say and what they privately believe. The group seems to operate based on what the members of the group say to one another. It may often happen that the members harbor private doubts about what the group is doing, but they refuse to voice them, and the group proceeds as if the doubts did not exist.

Social psychologists have known for several decades that a group is more than the sum or average of its members. Influential early research showed that groups sometimes make decisions that are riskier than what the average private opinion of the group members favors. More relevant recent work has shown that groups tend to communicate and make decisions based on what the members have in common, which may differ substantially from what the individual members think. In a remarkable series of studies, psychologists Garold Stasser and William Titus showed that groups sometimes make poor decisions even when they have sufficient information to do better. In one study, each group was supposed to decide which of two candidates to hire. Each member of the group came to the meeting armed with preliminary information about the candidates. The researchers provided more information favoring one candidate, Anderson, but they scattered it through the group. In contrast, the smaller amount of information favoring the other candidate, Baker, was concentrated so everyone knew it. Had the group really pooled their information, they would have discovered that the totality pointed clearly toward hiring Anderson. But instead of doing this, group after group merely talked about what they all knew in common, which was the information favorable to Baker. As a result, group after group chose Baker.

We have already seen how groups involved in evil will suppress doubts and dissent. [Our earlier discussion] quoted some of the people who worked in the Stalinist terror. These individuals said they privately doubted the propriety of what they were doing, but whenever anyone would begin to speak about such doubts, the others would silence him by insisting on the party line.

The Terror following the French Revolution showed how cruelty can escalate as a result of the pattern in which private doubts are kept secret and public statements express zeal and fervor. The Terror was directed mainly at the apparent enemies of the Revolution, and the Revolutionary government was constantly obsessed with internal enemies who presumably sought to betray it. Hence, those at the center of government were paradoxically its most likely victims. To criticize the Revolution or even to question its repressive measures was to invite suspicion of oneself. Accordingly, the members of the tribunal and others began to try to outdo each other in making strong statements about the need for harsh measures, because only such statements could keep them safe from the potential accusation of lacking the proper attitudes. The discussions and decisions featured mainly the most violent and extreme views, and the degree of brutality escalated steadily. Ironically, the leaders’ fear of one another caused them to become ever more violent, even draconian, with the result that they all really did have more and more to fear. And one by one, most of them were killed by the Revolution over which they were presiding.

Many people will sympathize with victims or question whether their own side’s most violent actions are morally right, but they will also feel ashamed of these doubts. What is said in the group, and what is likely to dictate the group’s actions, will be the most extreme and virulent sentiments. Whatever their private feelings, the members may express only the politically correct views of strong hatred of the enemy. In such an environment, the group’s actions may reflect a hatred that is more intense than any of its members actually feel. The group will be more violent than the people in it. Given all the other processes that foster escalation, it may not even be necessary for groups to have this effect forever. Once the members of the group are waist-deep in blood, it is too late for them to question the group’s project as a whole, and so they are all the more likely to wade in even deeper.

[…]

Self-deception is an iffy business. People cannot convince themselves of just anything they might want to believe. In fact, the margin for self-deception is often rather small: People can only stretch the facts and the evidence to a limited degree. Groups, however, have several advantages in this regard, because they can support each other’s beliefs. When one is surrounded by people who all believe the same thing, any contrary belief gradually seems less and less plausible. It is not hard for us as outsiders to knock holes in the self-justifying reasoning of perpetrators, but such an exercise is misleading. Perpetrators often find themselves in groups where no one would think to raise objections and everyone would agree with even flimsy arguments that support their side.

Great evil can be perpetrated by small groups when the members strive to think alike and support the prevailing views. Unfortunately, power tends to produce just such situations, because the most powerful men (and presumably women) tend to surround themselves with like-minded associates, who become reluctant to challenge the prevailing views. Whether the group is a small religious cult, a set of corporate executives, or a ruling clique in a large country, the justifications expressed by everyone in the group will tend to gain force.

The communication patterns increase this force. Committees and other small groups tend to focus on what everyone believes and knows in common. Private opinions and extraneous facts are kept to oneself. […] Many individuals might have doubts and qualms but are reluctant to express them, so that everything said publicly in the group conforms to the party line. When the moral acceptability of some violent action is at issue, everyone keeps silent about his reservations and objections, and everyone repeats the overt justifications and rationalizations. As a result, everyone gets the impression that everyone else believes those justifications and that his own doubts are an anomaly. One may even feel guilty about having doubts; everyone else seems so certain.

One of the most remarkable phenomena of the twentieth century is the speed with which countries have abandoned their totalitarian beliefs, despite having advocated them with apparently minimal dissent for long periods of time or at great cost. All the Germans seemed to be behind Hitler, but immediately after the war, the Allies could find hardly anyone who professed to have sincerely believed in the Nazi world view. Likewise, the nations of Eastern Europe apparently supported their Communist governments with little criticism or dissent, and then in 1989 they abruptly abandoned Communism wholesale and embraced an entirely different approach to politics and economics.

Such rapid and radical conversions begin to make sense if one accepts the view of self-deception we have developed here. People want to believe what the government tells them. They want to believe that what their society is doing is the right thing. To help themselves believe, they suspend criticism and questioning, and they go along with others in expressing their preferred views. But when circumstances discredit the ruling view, they suddenly acknowledge all the problems and fallacies they had avoided, and they can say with reasonable honesty that they did not sincerely believe it after all. Their desire to believe makes a great deal of difference when the facts are ambiguous.

Pinker delves more deeply into the social and psychological mechanisms driving this type of behavior:

Why do people so often impersonate sheep? It’s not that conformity is inherently irrational. Many heads are better than one, and it’s usually wiser to trust the hard-won wisdom of millions of people in one’s culture than to think that one is a genius who can figure everything out from scratch. Also, conformity can be a virtue in what game theorists call coordination games, where individuals have no rational reason to choose a particular option other than the fact that everyone else has chosen it. Driving on the right or the left side of the road is a classic example: here is a case in which you really don’t want to march to the beat of a different drummer. Paper currency, Internet protocols, and the language of one’s community are other examples.

But sometimes the advantage of conformity to each individual can lead to pathologies in the group as a whole. A famous example is the way an early technological standard can gain a toehold among a critical mass of users, who use it because so many other people are using it, and thereby lock out superior competitors. According to some theories, these “network externalities” explain the success of English spelling, the QWERTY keyboard, VHS videocassettes, and Microsoft software (though there are doubters in each case). Another example is the unpredictable fortunes of bestsellers, fashions, top-forty singles, and Hollywood blockbusters. The mathematician Duncan Watts set up two versions of a Web site in which users could download garage-band rock music. In one version users could not see how many times a song had already been downloaded. The differences in popularity among songs were slight, and they tended to be stable from one run of the study to another. But in the other version people could see how popular a song had been. These users tended to download the popular songs, making them more popular still, in a runaway positive feedback loop. The amplification of small initial differences led to large chasms between a few smash hits and many duds – and the hits and duds often changed places when the study was rerun.
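
(Stepping outside the excerpt for a moment: the runaway dynamic Watts observed is easy to get a feel for in a toy simulation. The sketch below is my own illustration of the general rich-get-richer mechanism, not a reconstruction of the actual experiment; the number of songs, the number of users, and the seeding rule are all invented for the example.)

```python
import random

def simulate_downloads(n_songs=48, n_users=5000, social_influence=True, seed=None):
    """Toy model of a popularity feedback loop: each simulated user picks
    one song, and when download counts are visible, popular songs are
    proportionally more likely to be picked (all parameters invented)."""
    rng = random.Random(seed)
    downloads = [1] * n_songs  # seed every song with one download
    for _ in range(n_users):
        if social_influence:
            # Counts visible: choice probability is proportional to current
            # popularity, so small early leads compound into big ones.
            song = rng.choices(range(n_songs), weights=downloads)[0]
        else:
            # Counts hidden: every song is equally likely to be sampled.
            song = rng.randrange(n_songs)
        downloads[song] += 1
    return sorted(downloads, reverse=True)

# With influence on, a few songs run away with most of the downloads, and
# which songs win varies from seed to seed; with it off, the distribution
# stays comparatively flat and stable across runs.
print(simulate_downloads(social_influence=True, seed=1)[:5])
print(simulate_downloads(social_influence=False, seed=1)[:5])
```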

Whether you call it herd behavior, the cultural echo chamber, the rich get richer, or the Matthew Effect, our tendency to go with the crowd can lead to an outcome that is collectively undesirable. But the cultural products in these examples – buggy software, mediocre novels, 1970s fashion – are fairly innocuous. Can the propagation of conformity through social networks actually lead people to sign on to ideologies they don’t find compelling and carry out acts they think are downright wrong? Ever since the rise of Hitler, a debate has raged between two positions that seem equally unacceptable: that Hitler single-handedly duped an innocent nation, and that the Germans would have carried out the Holocaust without him. Careful analyses of social dynamics show that neither explanation is exactly right, but that it’s easier for a fanatical ideology to take over a population than common sense would allow.

There is a maddening phenomenon of social dynamics variously called pluralistic ignorance, the spiral of silence, and the Abilene paradox, after an anecdote in which a Texan family takes an unpleasant trip to Abilene one hot afternoon because each member thinks the others want to go. People may endorse a practice or opinion they deplore because they mistakenly think that everyone else favors it. A classic example is the value that college students place on drinking till they puke. In many surveys it turns out that every student, questioned privately, thinks that binge drinking is a terrible idea, but each is convinced that his peers think it’s cool. Other surveys have suggested that gay-bashing by young toughs, racial segregation in the American South, honor killings of unchaste women in Islamic societies, and tolerance of the terrorist group ETA among Basque citizens of France and Spain may owe their longevity to spirals of silence. The supporters of each of these forms of group violence did not think it was a good idea so much as they thought that everyone else thought it was a good idea.

Can pluralistic ignorance explain how extreme ideologies may take root among people who ought to know better? Social psychologists have long known that it can happen with simple judgments of fact. In another hall-of-fame experiment, Solomon Asch placed his participants in a dilemma right out of the movie Gaslight. Seated around a table with seven other participants (as usual, stooges), they were asked to indicate which of three very different lines had the same length as a target line, an easy call. The six stooges who answered before the participant each gave a patently wrong answer. When their turn came, three-quarters of the real participants defied their own eyeballs and went with the crowd.

But it takes more than the public endorsement of a private falsehood to set off the madness of crowds. Pluralistic ignorance is a house of cards. As the story of the Emperor’s New Clothes makes clear, all it takes is one little boy to break the spiral of silence, and a false consensus will implode. Once the emperor’s nakedness became common knowledge, pluralistic ignorance was no longer possible. The sociologist Michael Macy suggests that for pluralistic ignorance to be robust against little boys and other truth-tellers, it needs an additional ingredient: enforcement. People not only avow a preposterous belief that they think everyone else avows, but they punish those who fail to avow it, largely out of the belief – also false – that everyone else wants it enforced. Macy and his colleagues speculate that false conformity and false enforcement can reinforce each other, creating a vicious circle that can entrap a population into an ideology that few of them accept individually.

Why would someone punish a heretic who disavows a belief that the person himself or herself rejects? Macy et al. speculate that it’s to prove their sincerity – to show other enforcers that they are not endorsing a party line out of expedience but believe it in their hearts. That shields them from punishments by their fellows – who may, paradoxically, only be punishing heretics out of fear that they will be punished if they don’t.

The suggestion that unsupportable ideologies can levitate in midair by vicious circles of punishment of those who fail to punish has some history behind it. During witch hunts and purges, people get caught up in cycles of preemptive denunciation. Everyone tries to out a hidden heretic before the heretic outs him. Signs of heartfelt conviction become a precious commodity. Solzhenitsyn recounted a party conference in Moscow that ended with a tribute to Stalin. Everyone stood and clapped wildly for three minutes, then four, then five . . . and then no one dared to be the first to stop. After eleven minutes of increasingly stinging palms, a factory director on the platform finally sat down, followed by the rest of the grateful assembly. He was arrested that evening and sent to the gulag for ten years. People in totalitarian regimes have to cultivate thoroughgoing thought control lest their true feelings betray them. Jung Chang, a former Red Guard and then a historian and memoirist of life under Mao, wrote that on seeing a poster that praised Mao’s mother for giving money to the poor, she found herself quashing the heretical thought that the great leader’s parents had been rich peasants, the kind of people now denounced as class enemies. Years later, when she heard a public announcement that Mao had died, she had to muster every ounce of thespian ability to pretend to cry.

To show that a spiral of insincere enforcement can ensconce an unpopular belief, Macy, together with his collaborators Damon Centola and Robb Willer, first had to show that the theory was not just plausible but mathematically sound. It’s easy to prove that pluralistic ignorance, once it is in place, is a stable equilibrium, because no one has an incentive to be the only deviant in a population of enforcers. The trick is to show how a society can get there from here. Hans Christian Andersen had his readers suspend disbelief in his whimsical premise that an emperor could be hoodwinked into parading around naked; Asch paid his stooges to lie. But how could a false consensus entrench itself in a more realistic world?

The three sociologists simulated a little society in a computer consisting of two kinds of agents. There were true believers, who always comply with a norm and denounce noncompliant neighbors if they grow too numerous. And there were private but pusillanimous skeptics, who comply with a norm if a few of their neighbors are enforcing it, and enforce the norm themselves if a lot of their neighbors are enforcing it. If these skeptics aren’t bullied into conforming, they can go the other way and enforce skepticism among their conforming neighbors. Macy and his collaborators found that unpopular norms can become entrenched in some, but not all, patterns of social connectedness. If the true believers are scattered throughout the population and everyone can interact with everyone else, the population is immune to being taken over by an unpopular belief. But if the true believers are clustered within a neighborhood, they can enforce the norm among their more skeptical neighbors, who, overestimating the degree of compliance around them and eager to prove that they do not deserve to be sanctioned, enforce the norm against each other and against their neighbors. This can set off cascades of false compliance and false enforcement that saturate the entire society.
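
(Another aside from me: the simulation Pinker describes here is simple enough to sketch in a few lines of code. The version below is a deliberately stripped-down variant of my own devising, with a ring of forty agents and made-up thresholds rather than the sociologists’ published parameters, but it reproduces the qualitative result: evenly scattered believers stay contained, while a cluster of believers sets off a cascade of false enforcement.)

```python
# A minimal sketch of the idea, not the published model (which also
# distinguishes mere compliance from active enforcement). The topology,
# thresholds, and update rule here are invented for illustration.

N = 40                  # agents on a ring; each sees 2 neighbors on either side
ENFORCE_THRESHOLD = 2   # a skeptic starts enforcing once this many neighbors enforce

def neighbors(i):
    return [(i - 2) % N, (i - 1) % N, (i + 1) % N, (i + 2) % N]

def run(believer_positions, steps=50):
    believer = [i in believer_positions for i in range(N)]
    enforcing = list(believer)  # true believers always enforce the norm
    for _ in range(steps):
        # Synchronous update: skeptics who feel enough local pressure start
        # (falsely) enforcing the norm on their own neighbors in turn.
        enforcing = [
            believer[i]
            or sum(enforcing[j] for j in neighbors(i)) >= ENFORCE_THRESHOLD
            for i in range(N)
        ]
    return sum(enforcing)

scattered = [0, 5, 10, 15, 20, 25, 30, 35]  # believers spread evenly
clustered = [0, 1, 2, 3, 4, 5, 6, 7]        # believers in one neighborhood
print("scattered:", run(scattered), "of", N, "agents end up enforcing")  # stays at 8
print("clustered:", run(clustered), "of", N, "agents end up enforcing")  # saturates at 40
```

(The detail doing the work is that each skeptic reacts only to how many of his immediate contacts appear to be enforcing, which is exactly the signal that pluralistic ignorance corrupts.)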

The analogy to real societies is not far-fetched. James Payne documented a common sequence in the takeover of Germany, Italy, and Japan by fascist ideologies in the 20th century. In each case a small group of fanatics embraced a “naïve, vigorous ideology that justifies extreme measures, including violence,” recruited gangs of thugs willing to carry out the violence, and intimidated growing segments of the rest of the populations into acquiescence.

Macy and his collaborators played with another phenomenon that was first discovered by Milgram: the fact that every member of a large population is connected to everyone else by a short chain of mutual acquaintances – six degrees of separation, according to the popular meme. They laced their virtual society with a few random long-distance connections, which allowed agents to be in touch with other agents with fewer degrees of separation. Agents could thereby sample the compliance of agents in other neighborhoods, disabuse themselves of a false consensus, and resist the pressure to comply or enforce. The opening up of neighborhoods by long-distance channels dissipated the enforcement of the fanatics and prevented them from intimidating enough conformists into setting off a wave that could swamp the society. One is tempted toward the moral that open societies with freedom of speech and movement and well-developed channels of communication are less likely to fall under the sway of delusional ideologies.
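
(To continue the aside: the long-distance channels can be grafted onto the same toy model by giving each agent a couple of randomly chosen far-away contacts. Since those contacts mostly sit outside the believers’ cluster, they dilute the apparent local consensus, and the cascade stalls instead of sweeping the ring. Again, this is my own simplified illustration, and every parameter is invented.)

```python
import random

N = 40
LONG_LINKS = 2   # random far-away contacts per agent
THRESHOLD = 3    # enforce if at least 3 of your 6 contacts are enforcing

def ring_distance(i, j):
    return min(abs(i - j), N - abs(i - j))

def run(believer_positions, steps=50, seed=0):
    rng = random.Random(seed)
    # Each agent's reference group: 4 ring neighbors plus 2 long-range links.
    contacts = {
        i: [(i - 2) % N, (i - 1) % N, (i + 1) % N, (i + 2) % N]
           + rng.sample([j for j in range(N) if ring_distance(i, j) > 2], LONG_LINKS)
        for i in range(N)
    }
    believer = [i in believer_positions for i in range(N)]
    enforcing = list(believer)
    for _ in range(steps):
        enforcing = [
            believer[i]
            or sum(enforcing[j] for j in contacts[i]) >= THRESHOLD
            for i in range(N)
        ]
    return sum(enforcing)

# The same clustered arrangement that saturated the ring above now barely
# spreads, because most agents can see that the wider world isn't enforcing.
print("clustered, with long-range links:", run(list(range(8))), "of", N)
```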

Macy, Willer, and Ko Kuwabara then wanted to show the false-consensus effect in real people – that is, to see if people could be cowed into criticizing other people whom they actually agreed with if they feared that everyone else would look down on them for expressing their true beliefs. The sociologists mischievously chose two domains where they suspected that opinions are shaped more by a terror of appearing unsophisticated than by standards of objective merit: wine-tasting and academic scholarship.

In the wine-tasting study, Macy et al. first whipped their participants into a self-conscious lather by telling them they were part of a group that had been selected for its sophistication in appreciating fine art. The group would now take part in the “centuries-old tradition” (in fact, concocted by the experimenters) called a Dutch Round. A circle of wine enthusiasts first evaluate a set of wines, and then evaluate one another’s wine-judging abilities. Each participant was given three cups of wine and asked to grade them on bouquet, flavor, aftertaste, robustness, and overall quality. In fact, the three cups had been poured from the same bottle, and one was spiked with vinegar. As in the Asch experiment, the participants, before being asked for their own judgments, witnessed the judgments of four stooges, who rated the vinegary sample higher than one of the unadulterated samples, and rated the other one best of all. Not surprisingly, about half the participants defied their own taste buds and went with the consensus.

Then a sixth participant, also a stooge, rated the wines accurately. Now it was time for the participants to evaluate one another, which some did confidentially and others did publicly. The participants who gave their ratings confidentially respected the accuracy of the honest stooge and gave him high marks, even if they themselves had been browbeaten into conforming. But those who had to offer their ratings publicly compounded their hypocrisy by downgrading the honest rater.

The experiment on academic writing was similar, but with an additional measure at the end. The participants, all undergraduates, were told they had been selected as part of an elite group of promising scholars. They had been assembled, they learned, to take part in the venerable tradition called the Bloomsbury Literary Roundtable, in which readers publicly evaluate a text and then evaluate each other’s evaluation skills. They were given a short passage to read by Robert Nelson, Ph.D., a MacArthur “genius grant” recipient and Albert W. Newcombe Professor of Philosophy at Harvard University. (There is no such professor or professorship.) The passage, called “Differential Topology and Homology,” had been excerpted from Alan Sokal’s “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity.” The essay was in fact the centerpiece of the famous Sokal Hoax, in which the physicist had written a mass of gobbledygook and, confirming his worst suspicions about scholarly standards in the postmodernist humanities, got it published in the prestigious journal Social Text.

The participants, to their credit, were not impressed by the essay when they rated it in private. But when they rated it in public after seeing four stooges give it glowing evaluations, they gave it high evaluations too. And when they then rated their fellow raters, including an honest sixth one who gave the essay the low rating it deserved, they gave him high marks in private but low marks in public. Once again the sociologists had demonstrated that people not only endorse an opinion they do not hold if they mistakenly believe everyone else holds it, but they falsely condemn someone else who fails to endorse the opinion. The extra step in this experiment was that Macy et al. got a new group of participants to rate whether the first batch of participants had sincerely believed that the nonsensical essay was good. The new raters judged that the ones who condemned the honest rater were more sincere in their misguided belief than the ones who chose not to condemn him. It confirms Macy’s suspicion that enforcement of a belief is perceived as a sign of sincerity, which in turn supports the idea that people enforce beliefs they don’t personally hold to make themselves look sincere. And that, in turn, supports their model of pluralistic ignorance, in which a society can be taken over by a belief system that the majority of its members do not hold individually.

If you’ve ever wondered what in the world could possibly have driven entire populations of ordinary people to abandon their humanity and become Nazis or Stalinists, or to commit acts of the most heinous violence on a mass scale, here’s your answer. Given the right incentives, even the most unassuming individuals can become part of a group that is crueler and more vindictive than any one of its particular members. And as bad as this effect is when it’s driven by forces like peer pressure and social incentives, it’s even worse when the participants genuinely believe that their enemies are so evil that violent action must be taken in order to stop them. Here’s another excerpt from Baumeister:

There are important implications of idealistic evil for the victims. Idealistic perpetrators believe they have a license, even a duty, to hate. They perceive the victim in terms of the myth of pure evil: as fundamentally opposed to the good, for no valid reason or even for the sheer joy of evil.

One implication is that ordinary restraints that apply even to severe conflicts may be waived. Holy wars tend to be more brutal and merciless than ordinary wars, and the reason for this is now apparent. When fighting against the forces of evil, there is no reason to expect a fair fight – and hence no reason to fight fair oneself. Idealists think they are up against a dangerous and powerful opponent who will stop at nothing to spread evil in the world, and so desperate and extreme measures are appropriate.

In a sense, this solves the problem of how ends justify means. If you are up against Satan, you should not expect the ordinary rules to apply. Murder may be acceptable if you are killing the most wicked and demonic enemies of the good; indeed, the state does that by executing the worst criminals and traitors. And the Bible is full of examples of how killing was all right when done in God’s name and in the service of divinely sanctioned causes. After all, it is only because of broad conceptions of goodness that murder is seen as wrong. If Satan is your enemy, you know that the fight is not going to be conducted in line with those notions of goodness. Satan cannot be expected to obey Christian morals and similar rules.

Another implication is that the victim’s options are slim. In instrumental evil, the victim can get off relatively easily by conceding whatever it is that the perpetrator wants. In idealistic evil, however, what the perpetrator often wants is that the victim be dead. The victim’s suffering is not one of many means to an end, but an essential condition for the (ostensible) triumph of good, and that leaves the victim with much less latitude to make a deal.

[…]

A key to understanding this link between idealism and violence is that high moral principles reduce the room for compromise. If two countries are fighting over disputed territory and neither can achieve a clear victory on the field, they may well make some kind of deal to divide the land in question between them. But it is much harder to make a deal with the forces of evil or to find some compromise in matters of absolute, eternal truth. You can’t sell half your soul to the devil.

This refusal to compromise is evident in the same examples we have discussed. The Thirty Years’ War was one of the most miserably ruinous wars ever fought, especially if one adjusts its devastation to account for the relatively primitive weapons in use. On several occasions, the war-weary sides were both ready to negotiate an end to it, but ideological commitments to one or the other version of Christianity scuttled the deal and sent everyone back to the battlefield.

Pinker reiterates this point:

Names like the “Thirty Years’ War” and the “Eighty Years’ War” […] tell us that the Wars of Religion were not just intense but interminable. The historian of diplomacy Garrett Mattingly notes that in this period a major mechanism for ending war was disabled: “As religious issues came to dominate political ones, any negotiations with the enemies of one state looked more and more like heresy and treason. The questions which divided Catholics from Protestants had ceased to be negotiable. Consequently … diplomatic contacts diminished.”

[…]

Ideologies, whether religious or political, push wars out along the tail of the deadliness distribution because they inflame leaders into trying to outlast their adversaries in destructive wars of attrition, regardless of the human costs.

And this is probably the biggest reason of all why ideological violence should be avoided as strenuously as possible. When you’ve got two sides who perceive themselves as fighting not just for practical goals, but for sacred values that can never be compromised, then the moment one side starts throwing punches, the severity of the conflict has nowhere to go but up – and the possibility of peaceful resolution disappears. As ContraPoints puts it:

Politics is based on a norm of reciprocity, which means I treat my opponents the way I expect them to treat me. If I start punching people out to silence their political speech, how can I reasonably expect them not to do the same to me?

And as Eneasz Brodski adds:

The reason I am against [even a white supremacist] getting punched, and everyone being so happy about it, is entirely just the rule of law. The thing that protects us from random people coming up and punching us because they disagree with our views is the fact that extrajudicial violence is not tolerated in our society. The reason that their Nazi groups don’t get to riot in the street and punch people is because we don’t accept that sort of thing. And once society does start accepting extrajudicial violence, that is when we get things like the deep South lynchings and those sorts of atrocities, because there is no protection anymore for the people who aren’t on the right side of the mob.

He continues:

There will always be crazy fuckers with awful ideas. You discredit them, and you rely on the laws to protect us from their violence. The law is what holds them back. It’s when the law fails to do so that things are dangerous (see: the South, up until just a few decades ago). That’s why I become worried when people gleefully cheer at the failure of the law to protect people from violence. If you think beating someone in the street will effectively discredit them and keep public opinion on your side, well, I think that’s a bad way to influence public opinion.

Again, there are some people who argue that they have no choice but to resort to violence, claiming that because their opponents hold beliefs that are inherently pro-violence, the very act of holding those beliefs is the equivalent of committing literal violence against them, so they are therefore justified in defending themselves by force. In rare cases, something like that actually can be true – if a Klansman or a jihadist, for instance, is standing in the middle of the town square and shouting that it’s time to start exterminating the Jews, so everybody grab your guns and let’s start hunting down every Jew in town – then yes, obviously that’s not a case where you’re going to be able to resolve it through calm, respectful dialogue (unless you happen to be a master negotiator). But in such cases, where someone’s speech really is extreme enough to incite others to commit violence, then that speech constitutes a crime and your response, for God’s sake, should be to call the police, not to blast the person on Twitter or try to smack them around a little bit yourself. If you really believe that what they’re saying poses a legitimate threat to the safety of others, then your duty is to alert the authorities so that they can stop whatever terrible act of violence is about to be committed. Otherwise – if, for instance, your opponents are just making resentful noises about how they wish somebody would put you out of their misery – then that’s deeply troubling, to be sure, but it’s not the equivalent of committing literal acts of violence against your side, and it doesn’t justify committing violent acts of your own in “self-defense” (especially considering the fact that most of the people on your side have probably made plenty of similar comments themselves, directed in the opposite direction). Your enemies’ ideas may be bad – so bad that they could lead to dangerous consequences at some point in the future – but if that’s the case, the point remains that they haven’t crossed the line into literal violence yet. We’re still talking about ideas here; and the way to handle ideas is through reasoned dialogue, not through force.

XII.

Thankfully, most of us aren’t caught up in life-or-death fights for our survival. When we go to political rallies or get into arguments with strangers on the internet, we don’t have to worry about the possibility that our opponents will retaliate by sending soldiers to our houses to kick our doors down and threaten our lives. This is an incredibly good thing; compared to the billions of other people throughout human history who haven’t been so lucky (and some who still aren’t), we are part of a very privileged minority, and we should be grateful for the degree of peaceable coexistence we’re able to enjoy.

Unfortunately though, for a lot of people it’s almost disappointing not to be able to fight in the kind of high-stakes life-and-death struggles that their forebears went through. (Recall 8footpenguin’s reflections earlier about yearning to be part of something bigger than yourself.) There’s just something about that heroic good vs. evil narrative that’s impossible to shake; that deep inner need to think of yourself as someone who bravely fights for good is just too compelling. So given the lack of an actual flesh-and-blood conflict to fight in, a lot of people maintain their combative mentality anyway, and just “wage war” in a metaphorical sense, by making the “battle of ideas” their battleground. They may not be able to fight in a real war, but they can at least convince themselves that what they’re doing is the same thing. But the result of this attitude, as Alexander points out, is that in areas like US politics, “liberals and conservatives are seeming less and less like people who happen to have different policy prescriptions, and more like two different tribes engaged in an ethnic conflict quickly approaching Middle East level hostility.”

This is not a good sign, to say the least; and the patterns of behavior that have emerged around these antagonisms – the sharp escalation in “outrage culture,” people wanting to forcibly silence opposing ideas rather than constructively engage with them, and so forth – can feel distinctly worrying. Of course, it’s not that they’re completely without historical precedent. These impulses have always been present in people’s attitudes at some level; and in fact, recent surveys suggest that modern attitudes toward banning opposing ideas are basically in line, percentage-wise, with those of previous generations. It’s true that a troublingly large proportion of people nowadays are in favor of prohibiting speech they don’t like – but this is equally true among different age groups, and has been just as true in previous eras as it is today, if not more so.

So for instance, as frightening as a lot of right-wing movements have recently become in their aggression toward racial and religious minorities, feminists, people they perceive as socialists, and so on, it’d be a major mistake to act like these ugly impulses were some completely new phenomenon that we’d never had to deal with before. As recently as a few decades ago, you could have gone to prison (or worse) if you dared to advocate for socialism. Voting while female or non-white was literally illegal. Even just openly existing as a gay person meant that you were running a serious risk to your life. If nothing else, you could count on being met with an absolute firestorm of backlash – ostracism, public shaming, threats of violence, etc. – if you tried to advocate for the “wrong” ideas, like not demonstrating enough nationalism or not believing in the Christian religion strongly enough. As true as these things often are today, unfortunately, they’re just extensions of patterns that have always existed.

Similarly, if you’re someone who thinks that the excesses of “political correctness” from the modern left are completely unprecedented, you might also recall the ideological landscape during, say, the 60s and 70s – or for that matter, the late 80s and early 90s. Here’s Chuck Klosterman (writing shortly before the modern-day culture wars blew up so dramatically), describing that latter era:

I’m reticent to use the term “political correctness.” I realize it drives certain people really, really crazy. (My wife is one of these people.) However, there isn’t a better term to connote the primary linguistic issue in America from (roughly) 1986 to 1995. Today, the phrase “political correctness” is mostly used as a quaint distraction. No one takes it too seriously. It feels like something that only matters to Charles Krauthammer. But in 1990, that argument was real. If you cared about ideas, you had to deal with it. At the debate’s core, a meaningful philosophical question was exposed and dissected: If someone is personally offended by a specific act, does that alone qualify the act as collectively offensive? It’s a problem that’s essentially unsolvable. But what made things so insane in 1990 was the degree to which people worried about how this question would change everything about society. Up until the mid-eighties, there was always a shared assumption that the Right controlled the currency of outrage; part of what made conservatives “conservative” was their discomfort with profanity and indecency and Elvis Presley’s hips. But then – somewhat swiftly, and somehow academically – it felt as if the Left was suddenly dictating what was acceptable to be infuriated over (and always for ideological motives, which is why the modifier “politically” felt essential). This created a lot of low-level anxiety whenever people argued in public. Every casual conversation suddenly had the potential to get someone fired. It was a great era for white people hoping to feel less racist by accusing other white people of being very, very racist. A piece of art could be classified as sexist simply because it ignored the concept of sexism. Any intended message mattered less than the received message, and every received message could be interpreted in whatever way the receiver wanted. So this became a problem for everybody. It was painlessly oppressive, and the backlash was stupid and adversarial.

And here are still more examples, this time given by Pinker in his discussion of the tribulations suffered by behavioral scientists in previous decades. Again, what these cases illustrate is that the phenomenon of activists and academics getting overly riled up about ideas they find offensive isn’t something that’s just emerged recently; it’s been around for a long time:

The topic of innate differences among people has [always produced considerable controversy]. But some scholars [in the 1960s and 70s] were incensed by the seemingly warm-and-fuzzy claim that people have innate commonalities. In the late 1960s the psychologist Paul Ekman discovered that smiles, frowns, sneers, grimaces, and other facial expressions were displayed and understood worldwide, even among foraging peoples with no prior contact with the West. These findings, he argued, vindicated two claims that Darwin had made in his 1872 book The Expression of the Emotions in Man and Animals. One was that humans had been endowed with emotional expressions by the process of evolution; the other, radical in Darwin’s time, was that all races had recently diverged from a common ancestor. Despite these uplifting messages, Margaret Mead called Ekman’s research “outrageous,” “appalling,” and “a disgrace” – and these were some of the milder responses. At the annual meeting of the American Anthropological Association, Alan Lomax Jr. rose from the audience shouting that Ekman should not be allowed to speak because his ideas were fascist. On another occasion an African American activist accused him of racism for claiming that black facial expressions were no different from white ones. (Sometimes you can’t win.) And it was not just claims about innate faculties in the human species that drew the radicals’ ire, but claims about innate faculties in any species. When the neuroscientist Torsten Wiesel published his historic work with David Hubel showing that the visual system of cats is largely complete at birth, another neuroscientist angrily called him a fascist and vowed to prove him wrong.

[…]

Some of these [kinds of] protests were signs of the times and faded with the decline of radical chic. But the reaction to [certain] books on evolution continued for decades and became part of the intellectual mainstream.

[Foremost among these books] was E. O. Wilson’s Sociobiology, published in 1975. Sociobiology synthesized a vast literature on animal behavior using new ideas on natural selection from George Williams, William Hamilton, John Maynard Smith, and Robert Trivers. It reviewed principles on the evolution of communication, altruism, aggression, sex, and parenting, and applied them to the major taxa of social animals such as insects, fishes, and birds. The twenty-seventh chapter did the same for Homo sapiens, treating our species like another branch of the animal kingdom. It included a review of the literature on universals and variation among societies, a discussion of language and its effects on culture, and the hypothesis that some universals (including the moral sense) may come from a human nature shaped by natural selection. Wilson expressed the hope that this idea might connect biology to the social sciences and philosophy, a forerunner of the argument in his later book Consilience.

[…]

Following a favorable review in the New York Review of Books by the distinguished biologist C. H. Waddington, the “Sociobiology Study Group” (including two of Wilson’s colleagues, the paleontologist Stephen Jay Gould and the geneticist Richard Lewontin) published a widely circulated philippic called “Against ‘Sociobiology.’” After lumping Wilson with proponents of eugenics, Social Darwinism, and [Arthur] Jensen’s hypothesis of innate racial differences in intelligence, the signatories wrote:

The reason for the survival of these recurrent determinist theories is that they consistently tend to provide a genetic justification of the status quo and of existing privileges for certain groups according to class, race, or sex…. These theories provided an important basis for the enactment of sterilization laws and restrictive immigration laws by the United States between 1910 and 1930 and also for the eugenics policies which led to the establishment of gas chambers in Nazi Germany.

…What Wilson’s book illustrates to us is the enormous difficulty in separating out not only the effects of environment (e.g., cultural transmission) but also the personal and social class prejudices of the researcher. Wilson joins the long parade of biological determinists whose work has served to buttress the institutions of their society by exonerating them from responsibility for social problems.

They also accused Wilson of discussing “the salutary advantages of genocide” and of making “institutions such as slavery… seem natural in human societies because of their ‘universal’ existence in the biological kingdom.” In case the connection wasn’t clear enough, one of the signatories wrote elsewhere that “in the last analysis it was sociobiological scholarship … that provided the conceptual framework by which eugenic theory was transformed into genocidal practice” in Nazi Germany.

One can certainly find things to criticize in the final chapter of Sociobiology. We now know that some of Wilson’s universals are inaccurate or too coarsely stated, and his claim that moral reasoning will someday be superseded by evolutionary biology is surely wrong. But the criticisms in “Against ‘Sociobiology’” were demonstrably false. Wilson was called a “determinist,” someone who believes that human societies conform to a rigid genetic formula. But this is what he had written:

The first and most easily verifiable diagnostic trait [about human societies] is statistical in nature. The parameters of social organization … vary far more among human populations than among those of any other primate species…. Why are human societies this flexible?

Similarly, Wilson was accused of believing that people are locked into castes determined by their race, class, sex, and individual genome. But in fact he had written that “there is little evidence of any hereditary solidification of status” and that “human populations are not very different from one another genetically.” Moreover:

Human societies have effloresced to levels of extreme complexity because their members have the intelligence and flexibility to play roles of virtually any degree of specification, and to switch them as the occasion demands. Modern man is an actor of many parts who may well be stretched to his limit by the constantly shifting demands of the environment.

As for the inevitability of aggression – another dangerous idea he was accused of holding – what Wilson had written was that in the course of human evolution “aggressiveness was constrained and the old forms of primate dominance replaced by complex social skills.” The accusation that Wilson (a lifelong liberal Democrat) was led by personal prejudice to defend racism, sexism, inequality, slavery, and genocide was especially unfair – and irresponsible, because Wilson became a target of vilification and harassment by people who read the manifesto but not the book.

At Harvard there were leaflets and teach-ins, a protester with a bullhorn calling for Wilson’s dismissal, and invasions of his classroom by slogan-shouting students. When he spoke at other universities, posters called him the “Right-Wing Prophet of Patriarchy” and urged people to bring noisemakers to his lectures. Wilson was about to speak at a 1978 meeting of the American Association for the Advancement of Science when a group of people carrying placards (one with a swastika) rushed onto the stage chanting, “Racist Wilson, you can’t hide, we charge you with genocide.” One protester grabbed the microphone and harangued the audience while another doused Wilson with a pitcher of water.

As the notoriety of Sociobiology grew in the ensuing years, Hamilton and Trivers, who had thought up many of the ideas, also became targets of picketers, as did the anthropologists Irven DeVore and Lionel Tiger when they tried to teach the ideas. The insinuation that Trivers was a tool of racism and right-wing oppression was particularly galling because Trivers was himself a political radical, a supporter of the Black Panthers, and a scholarly collaborator of Huey Newton’s. Trivers had argued that sociobiology is, if anything, a force for political progress. It is rooted in the insight that organisms did not evolve to benefit their family, group, or species, because the individuals making up those groups have genetic conflicts of interest with one another and would be selected to defend those interests. This immediately subverts the comfortable belief that those in power rule for the good of all, and it throws a spotlight on hidden actors in the social world, such as females and the younger generation. Also, by finding an evolutionary basis for altruism, sociobiology shows that a sense of justice has a deep foundation in people’s minds and need not run against our organic nature. And by showing that self-deception is likely to evolve (because the best liar is the one who believes his own lies), sociobiology encourages self-scrutiny and helps undermine hypocrisy and corruption.

[…]

Trivers later wrote of the attacks on sociobiology, “Although some of the attackers were prominent biologists, the attack seemed intellectually feeble and lazy. Gross errors in logic were permitted as long as they appeared to give some tactical advantage in the political struggle…. Because we were hirelings of the dominant interests, said these fellow hirelings of the same interests, we were their mouthpieces, employed to deepen the [deceptions] with which the ruling elite retained their unjust advantage. Although it follows from evolutionary reasoning that individuals will tend to argue in ways that are ultimately (sometimes unconsciously) self-serving, it seemed a priori unlikely that evil should reside so completely in one set of hirelings and virtue in the other.”

[…]

Behavioral science is not for sissies. Researchers may wake up to discover that they are despised public figures because of some area they have chosen to explore or some datum they have stumbled upon. Findings on certain topics – daycare, sexual behavior, childhood memories, the treatment of substance abuse – may bring on vilification, harassment, intervention by politicians, and physical assault. Even a topic as innocuous as left-handedness turns out to be booby-trapped. In 1991 the psychologists Stanley Coren and Diane Halpern published statistics in a medical journal showing that lefties on average had more prenatal and perinatal complications, are victims of more accidents, and die younger than righties. They were soon showered with abuse – including the threat of a lawsuit, numerous death threats, and a ban on the topic in a scholarly journal – from enraged lefthanders and their advocates.

So again, in historical terms, this whole trend of outrage-fueled hyper-reactivity isn’t anything new. Is it worse this time around, though? It often feels that way from the inside – although if you remember the 1992 Los Angeles riots, for instance, I’m sure you could make a good case that the modern-day version is actually a hell of a lot tamer than its earlier incarnations. Certainly it’s not nearly as bad as the most egregious examples from the more distant past, like the Reign of Terror or the Cultural Revolution or what have you (not that that’s saying much).

Still, regardless of how bad it may be on an absolute scale at the moment, it also matters which direction it’s going – whether it’s getting better or worse – and how sharply. And it does seem like the toxicity has started trending in a dramatically worse direction in recent years, as well as getting more widespread, with increasing numbers of people getting sucked into it due to the rising influence of the internet, cable news, and social media. And I think that’s a factor that really is new and unprecedented, and makes the modern-day situation different from previous ones in certain significant ways.

For one thing – and I touched on this briefly earlier but want to mention it again here – these new forms of media tend to amplify everything to a much more extreme scale, in a way that wasn’t really possible in previous generations. Events that might otherwise have gone relatively unnoticed can blow up and turn into massive national fiascoes as everybody circulates and shares them. (Just think about how much pandemonium there’d be in response to a YouTube clip of those anti-Wilson demonstrations if they happened today, for instance.) In that same vein, whereas in previous generations these events might only have received commentary from a handful of career journalists and commentators – most of whom followed codes of conduct requiring them to remain professional in their coverage (i.e. to uphold norms of accuracy and objectivity and equanimity) – now everyone has a platform to weigh in and give their opinion, and those professional standards have accordingly been crowded out in favor of unconstrained hot takes, overreactions, and exaggerations from anyone and everyone willing to fire them off. It’s one thing to have a situation in which the news is mostly covered by level-headed journalists, with only a few scattered demagogues like Rush Limbaugh chiming in from the periphery; but it’s another thing altogether to have a situation in which not only do those demagogues have their own platforms, but every single one of their millions of listeners does as well. It shouldn’t exactly be surprising that adding millions of angry Rush Limbaugh fans to the conversation might make the general tone more toxic overall (especially if you’re adding millions of equally angry Limbaugh-hating liberals to the conversation at the same time). Of course, for the most part, it’s a good thing that more people can now participate – I’m inclined to think that despite everything, the internet and social media have tended to make the smartest and most understanding people even smarter and more understanding – but the flip side of this is that there’s now also a lot more misinformation and flat-out nastiness drowning out the high-quality content for those who aren’t discerning enough to filter it out.

And this gets at another important point about the internet that’s fairly new and unique: The typical constraints on page space and airtime that exist in traditional media are essentially nonexistent online, which means that conversations can basically be dominated by whoever’s willing to put in the time to dominate them. If there’s a certain subset of people with a particular personality type – i.e. those “keyboard warrior” types who spend all their free time online and are unusually forceful and persistent in asserting their opinions at every possible opportunity because they lack the standard social skills and etiquette to exercise any kind of restraint – then those are the people who will ultimately make up the majority of every conversation. (Case in point: NPR recently removed the comment sections from their website after realizing that the majority of comments were being posted by a small handful of users posting hundreds of times a month.) Zach Weinersmith’s comic strip below does a good job illustrating how these kinds of hyper-vocal extremists can dominate the discourse simply by being more aggressive than everyone else in making themselves heard:

[Comic strip by Zach Weinersmith]

Of course, as distressing as it is that the extremists have taken over the public conversation so dramatically in recent years, this comic strip also serves as a valuable reminder that just because the discourse has gotten so much more unhinged lately doesn’t necessarily mean that the population as a whole has become irredeemably extreme; it may just be that we’re hearing more from the extremists now than we were before. This is something that’s probably worth bearing in mind if you find yourself getting too worked up about how insane the online world seems to have gotten nowadays.

And while we’re on the subject, another factor to keep in mind when evaluating the quality of modern online discourse is that a lot of times, the people dominating a particular conversation might be, say, 14 years old – but because the anonymity of the internet hides their identity, nobody ever actually realizes that they’re 14 years old. It can be easy to forget about demographics when you’re arguing with anonymous strangers online, after all; but young people are disproportionately represented on the internet and social media, so as more and more of our discourse shifts to those platforms, you should expect that more and more of the people you’re debating with will belong to that younger demographic (as well as other demographics that have historically been excluded from the mainstream conversation but can now chime in more freely, like people with severe mental illnesses). If you find yourself getting frustrated with someone who’s acting childish and immature toward you in an online debate, it’s probably worth considering that they may, in fact, still be a literal child (or someone with an equivalent level of mental/emotional development). Likewise for a lot of the college students whose radical activism has been making headlines and rubbing a lot of people the wrong way lately; it’s true that many of them appear to lack emotional maturity, but that’s because they’re still maturing – most of them are just a year or two removed from still legally being children.

This isn’t to say that you should discount their opinions just because of their age, of course; after all, one of the most important political activists of our time, Malala Yousafzai, was 15 when her activism made her a global icon – and history is full of other young people leading the charge on major social changes (the Student Nonviolent Coordinating Committee, a group of college students that helped shape the civil rights movement, is another good example). But it’s at least something to bear in mind when trying to understand where some of these more extreme views are coming from. A lot of these young people are at a point in their lives where they’re just realizing for the first time how important certain social justice issues are, so there’s more of a feeling of urgency towards these issues than you find in most adults. The more they learn about these issues, the more eager they are to speak out on them – and to be sure, their eagerness can often be a crucial catalyst for producing real social change, particularly in cases when the older generations are too set in their ways to take the initiative themselves. But sometimes, due to a lack of experience, young activists can come out swinging a bit too zealously, and it’s only after a period of trial and error that they adjust down to a more reasonable level of outrage and theatricality. This is a learning process that almost everybody has to go through at some point (Who didn’t have some kind of radical phase during their youth that they later outgrew?), but in the meantime you have to be willing to allow for a bit more extremism and heterodoxy than usual when it comes to student politics; that kind of thing just comes with the territory of being young and full of passion. You don’t have to agree with it, of course, but you should understand where it’s coming from and expect to see more of it in contexts like college campuses and on certain websites with disproportionately young user bases (Tumblr, Twitter, Reddit, etc.).

On that same note, when you’re browsing these sites, remember that since a lot of the people you’re interacting with are probably much younger than you imagine, it’s worth resisting the urge to take your own social cues from whatever immature behavior they might display. Thinking that everybody you’re interacting with is a grown adult can mislead you into thinking that certain childish behaviors are standard and expected – that this must just be how everyone interacts now – when in fact they’re only a product of young people acting their age. I think this is an underappreciated part of the reason why online discourse tends to have such an adolescent tone; the tone is largely being set by adolescents. And older people are unwittingly following their lead – so by treating this adolescent tone as the new normal, they make it the new normal. But this isn’t how it has to be.

Speaking of which, you should also bear in mind that a lot of the more inflammatory stuff you see online is just teenagers trolling for fun – i.e. being deliberately provocative just to see if they can get a rise out of people. In contrast to the subset of young people who take their causes very seriously and will often get incensed when they don’t think others are taking them seriously enough, there’s another subset of young people (sort of an exaggerated version of the “jesters” mentioned earlier) who pride themselves on never getting offended by anything, and who frankly find the whole concept of getting “offended” ridiculous – so for them, there’s nothing more hilarious than to trigger a hysterical pearl-clutching hissy fit from whichever group they find the most overreactive (whether that be conservative religious “fundies” getting offended by casual profanity/nudity/etc. or liberal “social justice warriors” getting offended by casual racism/sexism/homophobia/etc.). To some extent, this kind of boundary-pushing is pretty standard behavior for young people; I’m sure you can remember various occasions as a teenager when you and your friends competed to see who could tell the most offensive jokes or pull off the most outrageous pranks. And as we’ve seen, there are obviously also plenty of adults who never really outgrow that adolescent mentality. Having said that though, again, you shouldn’t completely discount these people just because they’re acting immaturely; even if they’re making a mockery of a topic you take seriously, that doesn’t mean that you should jump down their throats or try to destroy them. I suspect there are a surprisingly large number of people online who are currently hardcore ideologues, but who originally started off just as casual trolls and only came to genuinely identify with the things they were saying after repeated clashes with the other side. You can imagine, for instance, some young kid going online and posting blatantly fascist opinions in an attempt to wind people up; at first it’s all in good fun – “Haha, I can’t believe how seriously people are taking this stuff; I sure am making them look ridiculous with all these hysterical reactions they’re having.” But gradually, as he receives more and more extreme backlash against his trolling, his amusement starts to turn into genuine incredulity at how vindictive his dupes can be – “Jesus, these people really are crazy; I really can’t believe how seriously they’re taking this. I’m just making jokes, for God’s sake; why are they being so oversensitive?” As he finds himself getting drawn into actual debates with some of them, he starts pushing back against them for real, not necessarily because he actually believes what he’s saying (yet), but more just out of sheer annoyance with how legitimately unreasonable and ridiculous they’re being. The more of these exchanges he has, though, the more serious he gets – until eventually, he finds himself looking up fascist talking points not because he’s just flippantly trying to troll people anymore, but because he’s actually trying to win arguments. And the people he starts looking to for support – the only people who understand what he’s experiencing – are other fascists, who radicalize him even further until ultimately he ends up fully committed to the cause.

Alexander describes how this can feel for the person experiencing it:

[If you’re criticizing popular opinions, you’ll find that] the good, righteous people [you’re arguing against] are not used to being argued against. [Even if you’re just joking around to begin with,] they round you off to a Bad, Unrighteous Person. It is unpleasant. And when this has happened enough, you start viewing the Good Righteous People as your enemies. You start feeling like even when they haven’t said anything too objectionable yet, it’s just a matter of time. You live in fear of waking up every day, seeing a smug self-congratulatory image macro about how stupid everyone who opposes [their ideology] is on your Facebook feed, and having the whole thing start over again.

And once you view the Good Righteous People as your enemies, you start viewing the Bad Unrighteous people as a sort of friend. Bad, unrighteous friends. But at least they sometimes stand up for you when no one else will.

And then you start to become Bad and Unrighteous yourself.

On a less extreme scale, Andrew Marantz describes how a similar experience turned one activist, Cassandra Fairbanks, from a classic left-winger into a full-fledged Trump diehard:

For the first half of 2016, she supported Bernie Sanders; when he dropped out, she was conflicted. “I couldn’t possibly support Hillary, I knew that,” she said. She considered Jill Stein, but concluded that Stein didn’t have enough charisma to win. (“No one wants to elect their weird yoga teacher who smells like cat urine.”) So she turned to Trump. “I was still working for these sites that were saying terrible things about him, but when I listened for myself I thought, His message makes sense.” She appreciated Trump’s opposition to political correctness, and his willingness, after the Orlando shootings, to focus on terrorism instead of on gun control. “I started saying a few pro-Trump things on Twitter, and people absolutely lost their shit,” she said. “I got called a literal Nazi so many times, I eventually went, Fuck it, I’ll just go all in.” She now writes for Sputnik, a news site funded by the Russian government.

The point is, even if your opponents act in a way that’s completely out of line, that doesn’t mean that your response should be to sink to that level yourself; someone who’s not actually that much of an enemy might very well become one if you push them far enough. Even if it’s true that a large proportion of the people taking part in these culture wars are just immature teenagers and internet trolls whose extremism should be taken with a grain of salt, it still doesn’t mean that the effects of this toxic discourse should just be ignored, or that all the ugliness should be allowed to persist, or that you should indulge in it yourself. Trying to do better in this one area may be the most important thing we can do to enable ourselves to deal with every other important issue more effectively, so it’s undoubtedly worth making the effort. Complaining about how terribly everyone is acting is easy – but as the old saying goes, “it’s better to light one candle than to curse the darkness.”

So now let’s shift gears, and talk about how to improve the situation.

XIII.

The first thing, if you find yourself getting fed up with all the toxicity, is not to let it discourage you from participating in these conversations in the first place. That’s how toxicity wins – by grinding down all the reasonable people until they’re too exhausted to continue and just opt out of the discourse entirely. No doubt, it can be very tempting at times to just throw up your hands and say, “To hell with it, I’m done trying to reason with you people.” But the thing is, once you’ve decided you’re done trying to reason with your opponents, you’ve removed what might be the only voice of reason they’re hearing. Engaging with people you disagree with – even people you vehemently disagree with, whose views completely disgust you – is the only way to ensure that they’re actually being exposed to different ideas that might have a moderating influence on their own opinions. Merely washing your hands of them and leaving them to their own devices just means that they’ll settle further into what they already believe, since the only people they’ll be hearing from at that point are those who already share their views.

Sure, it’s possible that you won’t get anywhere when you try to engage with them. That may even be the most likely outcome – and in certain truly intractable situations, it really may be the right move to strategically cut your losses and move on to more productive discussions. But you shouldn’t fool yourself into thinking that this is an actual solution to your disagreement. The solution to bad discourse is good discourse, not no discourse. And if you’re able to keep your composure, stay patient, and not slip into the pitfalls of bad discourse yourself, occasionally you really can have an actual influence on those who disagree with you. Ian Danskin provides a parable of sorts:

I celebrated New Year’s 2014 in the Boston Common with my girlfriend. And after ringing in the new year, we took the 2 AM train back towards her place in Providence. And, you can imagine: 2 AM, New Year’s Day, the train had a fair share of loud, drunk 20-somethings. And after a half hour or so of in-your-face revelry, a dude at the front of the train turned around and yelled at them, “You need to keep it down. It’s late, we’re tired, you’re not the only people in the world, much less this train.” And the exact moment he finished talking, someone yelled, “Eat a dick, man!” and they all laughed him down and got back to celebrating. And yes, that’s [antagonistic discourse] in a nutshell: responding to “Maybe be a decent person” with “Go fuck yourself” and going right back to business as usual. But here’s the thing: After a couple minutes, the train car quieted down. Because the air was sucked out. Everyone was self-conscious now. It’s one thing to blithely keep the party going without thinking how it affects others, and another when you know full well how it affects them, because they just told you. Something that was ignorant is now spiteful. And spite is harder to sustain. It takes effort.

Of course, not everyone responds to opposing viewpoints by becoming more self-conscious and ambivalent about their own positions. Given how widespread the norms of antagonistic discourse have become, it’s all too common for people to gleefully double down on their spite and throw it in your face when challenged (at least at first). But every now and then, if done right, speaking up and speaking out for what you believe genuinely can make a difference. It may be easy to feel otherwise – after all, you’re only one person; how much difference can you really make? And it’s not like you’re the only person in the world who can speak out for your side; even if you don’t step up, surely somebody else will, right? As Roger Fisher and William Ury write:

Sometimes people seem to prefer feeling powerless and believing that there is nothing they can do to affect a situation. That belief helps them avoid feeling responsible or guilty about inaction. It also avoids the costs of trying to change the situation — making an effort and risking failure, which might cause the person embarrassment.

But in an age where contentious issues are often decided by the tiniest of margins, even the smallest contribution to the discourse can matter in the bigger picture. As Alexander puts it:

Improving the quality of debate, shifting people’s mindsets from transmission to collaborative truth-seeking, is a painful process. It has to be done one person at a time, it only works on people who are already almost ready for it, and you will pick up far fewer warm bodies per hour of work than with any of the other methods. But in an otherwise-random world, even a little purposeful action can make a difference. Convincing 2% of people would have flipped three of the last four US presidential elections. And this is a capacity to win-for-reasons-other-than-coincidence that you can’t build any other way.

(and my hope is that the people most willing to engage in debate, and the ones most likely to recognize truth when they see it, are disproportionately influential – scientists, writers, and community leaders who have influence beyond their number and can help others see reason in turn)

The truth is, some of the biggest social changes in history have come about simply because one person decided to share their ideas with others, and did so in a way that turned out to be extremely convincing. Sometimes this has been a great thing, other times not so much; but the impact is undeniable. Just imagine, for instance, how different the world would be today if Karl Marx had never decided to write down and share his ideas. Or imagine if Ayn Rand had never written down her ideas and shifted things in the opposite ideological direction. Or imagine if Harriet Beecher Stowe had never written Uncle Tom’s Cabin and so profoundly aroused popular sentiments against slavery. Or imagine if Upton Sinclair had never written The Jungle and transformed the national dialogue around workplace health and safety. Or imagine if Rachel Carson had never written Silent Spring and inspired the modern environmentalist movement. The list goes on. In each of these cases, a single person managed to change their whole society, simply by presenting their views in a compelling way. And this demonstrates a crucial point: Ideas really do matter. Presenting an idea in a way that the right people find convincing – whether those people be an entire population of voters or just a few powerful decision-makers – can make all the difference in the world. “Indeed,” as John Maynard Keynes (the most influential economist of the 20th century) once noted, “the world is ruled by little else.”

Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back. I am sure that the power of vested interests is vastly exaggerated compared with the gradual encroachment of ideas. Not, indeed, immediately, but after a certain interval; for in the field of economic and political philosophy there are not many who are influenced by new theories after they are twenty-five or thirty years of age, so that the ideas which civil servants and politicians and even agitators apply to current events are not likely to be the newest. But, soon or late, it is ideas, not vested interests, which are dangerous for good or evil.

And Friedrich Hayek, arguably the second-most influential economist of the 20th century, agreed:

Society’s course will be changed only by a change in ideas. First you must reach the intellectuals, the teachers and writers [and today that might also include TV and internet thought leaders], with reasoned argument. It will be their influence on society which will prevail, and the politicians will follow.

(To see just how prescient this turned out to be, I highly recommend these two great posts on the Mont Pelerin Society (to which Hayek belonged) and the Fabian Society – two tiny, relatively obscure groups whose quiet influence helped shape our entire socio-economic order into what it is today.)

For all the emphasis we place on the actions of politicians and activists when discussing how history is changed, those individuals’ actions – their votes and demonstrations and proposed laws and so on – are generally more an expression of the zeitgeist than a shaper of it. What actually drives their actions are the ideas that influence them. And that means that when it comes to changing the world, the thing that makes the biggest difference is simply spreading the right ideas to the right people, and doing so in a way that they actually find compelling. The key point here, though, as I’ve been stressing this whole time, is that those people actually have to be convinced; they can’t just be browbeaten into joining the cause.

And this brings us back to another important point from before – that once you’ve decided to engage in the discourse, merely joining the fray isn’t enough; you have to be smart about how you engage. It’s important not to shy away from grappling with opposing views – but it’s just as important not to overcorrect too far in the other direction by taking an ultra-combative approach that embraces hostile conflict. You want to make sure you’re participating in a way that’s actually constructive, not simply deepening the hostility between the two sides.

Most obviously, this means rejecting violence as a tactic. Sure, there have been sporadic cases in the past of mass movements achieving their goals through force (and causing widespread death and destruction in the process) – but those instances almost universally took place before the advent of the modern free press and mass media. Now that we live in a time when information is so ubiquitous, conflicts aren’t just decided by whichever side fights the hardest; they’re decided by whichever side is best able to control the narrative and convince people that they’re right. It’s not a question of who’s physically stronger anymore; it’s all about public persuasion and optics now. And this is a good thing – it means that righteous underdogs have a better chance of winning than they used to. But it also means that if they aren’t smart in their strategy – i.e. if they adopt the same violent tactics that they’re protesting against – then they’re shooting themselves in the foot and doing more harm to their cause than good (to paraphrase Christian Picciolini).

Take the aforementioned example of leftists using violence to disrupt white supremacist rallies, for instance. As Robinson writes:

[In indulging violent impulses,] we might punch a Nazi, and feel pleased and victorious as we watch him bleed and cry, [but what we don’t realize is] that we have just made him 100 times more determined and vengeful, and have pushed his previously unsympathetic friends another inch closer to the far right position.

[…]

Consider Natasha Lennard’s article on “making Nazis afraid.” […] Lennard’s theory appears to be that the only way to prevent the rise of fascism is through violently attacking those who support it, and that if any violent anti-fascist group failed, it was simply because they did not have enough people committing enough violence. The solution is always, then, more violence. If you see white supremacist groups growing, you need greater numbers of people “besieging” them.

But what if this is wrong? What if, in fact, violent besiegings do contribute to an escalating cycle of violence? What if, here and now, they do serve as a formidable recruiting tool? I hope we’re certain that this is not the case, because if we’re wrong the consequences could be disastrous. I even see people on the left citing Hitler himself, who said that “only one thing could have stopped our movement – if our adversaries had understood its principle and from the first day smashed with the utmost brutality the nucleus of our new movement.” But are we certain that we should be trusting Adolf Hitler as an authority on what to do about Nazis? Hitler believed in a world in which only “superior brutality” could ensure political success. And, certainly, it worked for him – it just also left tens of millions of people dead. Personally I believe we should make sure we have exhausted every other possible option before resorting to “utmost brutality” and I would like to know why people think we have exhausted the other options.

I’m not confident that Lennard is taking the difficult questions seriously. For instance, she says that “white supremacy has never receded because it was asked politely.” But this is a facile and unfair description of those who question the utility of violence. Nobody is saying you should ask white supremacy politely to go away. The battle is not for the hearts and minds of white supremacists, but for the hearts and minds of the general public. Many on the left who take Lennard’s position believe that those who call for nonviolence are suggesting you can “debate white supremacism out of existence.” That’s not the case, though. What they say is that you win more public supporters by making your case through a clear and well-organized communication of your ideas than through showing up to right-wing events and hitting people with clubs. “Nazis don’t listen to reason,” people scoff. No, but people who are not Nazis might listen to reason, and the important thing is to make sure that the Nazis are marginal by keeping the vast majority of people on your side rather than driving anyone else toward theirs. “Fascism cannot be defeated by speech.” But how the hell do they know this? Nazi Germany couldn’t be defeated by speech. But a nascent and tiny group of fringe racists? I have more confidence than many on the left in the power of left-wing ideas to defeat pseudoscience and bigotry. And I’m always amazed that people give up on the value of communicating anti-racist ideas using reason and rhetoric even before they have actually tried it. (Also, if we’re being honest, some white supremacists can actually be convinced to drop their ideology. Former KKK “prodigy” R. Derek Black was slowly drawn away from his father’s racist belief system thanks to patient and caring liberal classmates, and recently wrote an essay on how shameful America’s racial history is.)

[…]

Many of the arguments I’ve seen against nonviolence […] have not been especially persuasive. Sometimes they’re just sophistry: this Washington Post op-ed suggests that “violence was critical to the success of the 1960s civil rights movement, as it has been to every step of racial progress in U.S. history.” The author’s justification for this statement is that Martin Luther King intentionally provoked violence from police and white supremacists in order to demonstrate the violence inherent in the U.S. racial hierarchy. But using this to say that “violence was critical” to the civil rights movement is odd, because it implies that the civil rights movement itself was violent, when it wasn’t. One can blur distinctions, but the civil rights movement simply did not deploy aggressive violence against its opponents.

The usual response here is to invoke Malcolm X: Martin Luther King’s nonviolence, it is said, only worked because whites preferred to deal with the nonviolent Martin rather than the non-nonviolent Malcolm. And that’s true: King succeeded in part because of a tacit “good cop/bad cop” dynamic between himself and more radical black activists. Endorsing King’s nonviolence without understanding the full range of tactics used in the pursuit of black liberation is a selective reading of history. But flattening Malcolm X into little more than a “scary, violence-advocating counterpart” to MLK is no less misleading. Something that is very rarely noted about Malcolm X is that while he is known for his defense of violence, he is not known for actually having used violence. In fact, in his practice Malcolm X was generally no more violent than King. He was famously pictured holding an M1 carbine rifle, and openly criticized demands that black people refrain from fighting back even if attacked. But Malcolm did not stage armed uprisings; he spoke of self-defense against aggression and a willingness to use whatever means would actually secure a person’s rights and dignity. “I don’t mean go out and get violent,” he said, but rather exercising nonviolence on the condition that others remained nonviolent. “It doesn’t mean I advocate violence, but at the same time, I am not against using violence in self-defense.” The rifle-photograph actually illustrates Malcolm’s attitude well: in it, he stands looking out the window, gun at the ready. He is not prowling around seeking racists to kill, he is standing firm and protecting his rights and dignity.

If someone is going to advocate “self-defensive violence” or “violence if necessary to achieve one’s rights” it’s very important to make clear what would and would not constitute self-defense, and what “necessity” is. Malcolm X was an incredibly disciplined and thoughtful individual, and he made careful distinctions between violence as a specific narrow tool for achieving one’s liberty against another violent aggressor, and wanton, useless violence. One can even agree with everything Malcolm X says about the legitimacy of violence in self-defense and still believe that King’s strategy of nonviolence is the optimum way to achieve certain social objectives. Personally, this is where I come down: I do not feel comfortable telling someone who is physically attacked that they should not defend themselves, but I also think King is right that radical nonviolence (never forget the “radical” part) is usually the best way of winning the public to one’s cause, unless you are already in a situation where Hitler is about to take power and there is little left to do but fight.

If we’re going to endorse “self-defense,” though, it’s important to be clear on what that is. In Charlottesville, the New York Times’ Sheryl Gay Stolberg reported seeing “club-wielding ‘antifa’ beating white nationalists being led out of the park.” Is attacking people who are retreating a form of self-defense? The justification here is usually that fascist ideology is itself violence, meaning that it’s not necessary for a person who holds such an ideology to actually be the one to initiate physical force in order for violence to be “defensive.” But if we accept that, then simply walking up to a Trump supporter and stabbing them would seem to also be an act of self-defense, at which point… “self-defense” seems to mean something quite different from people’s ordinary understanding of it. Mark Bray (a white Ivy League professor!) says it is a “privileged” position to criticize “self-defense.” Fine. But have we thereby justified every single kind of aggressive act toward anyone on the right, or are there some we haven’t justified? The slippage, where once you’ve justified “any means necessary” in combating fascism, you have license to do anything to anyone that you’ve labeled a fascist, seems in part responsible for some of the more aggressive (and, in my mind, strategically unhelpful) acts by Antifa members.

Nicholas Christakis sums it up succinctly when he says:

Disagreement is not oppression. Argument is not assault. Words – even provocative or repugnant ones – are not violence. The answer to speech we do not like is more speech.

And without a doubt, there are much more effective ways of advancing worthy causes than simply going out into the street and administering beatings to anyone who opposes them. As Singal writes:

What proponents of disrupting racist gatherings often leave out is that there are alternatives that can help delegitimize white supremacists without falling into any of these potential traps, and without setting aside progressives’ normal ethical qualms about violence. For those instances in which a group of white supremacists really are just attempting to rally or to march, have their permits in order, and so on – meaning there’s no legal way for their opponents to prevent the event – Schanzer laid out a fairly straightforward alternative: Counterdemonstrators should respond assertively, vociferously, and in far superior numbers – but at a distance from the extremists themselves. This tactic both prevents the sort of violent conflict American hate groups want, and has the added benefit of drawing at least some media and social-media attention away from the smaller hateful gathering and toward the much larger counterprotest.

It also seems to be the preferred approach of a wide variety of experts and advocates in this area. “The main thing that [hate groups] seek is attention and publicity to disseminate a message of hate,” Robert Trestan, executive director of the Anti-Defamation League’s Boston office, told NPR’s “All Things Considered” during an interview about today’s planned “free speech” rally on Boston Common, which some are concerned will be a magnet for hate groups. “And so the best-case scenario is they come and they speak at the Common and there is nobody there to listen.” And Moises Velasquez-Manoff, a contributing op-ed writer at the Times, explained earlier this week that according to experts, “Violence directed at white nationalists only fuels their narrative of victimhood – of a hounded, soon-to-be-minority who can’t exercise their rights to free speech without getting pummeled.” “I would want to punch a Nazi in the nose, too,” Maria Stephan, a program director at the United States Institute of Peace, told him. “But there’s a difference between a therapeutic and strategic response.” Progressives would be eagerly echoing and retweeting this sort of logic if the wonks in question were talking about ISIS rather than the National Vanguard. Why should their insights suddenly be ignored?

If this line of thinking is correct, anyone disgusted by organized displays of explicit hatred should adopt a stance along the lines of this: “You know what? Let the Nazis rally. Let them try to promote a dying ideology the entire nation finds execrable. Down the road we are going to set up a big, inclusive show of solidarity that will be ten times larger. And anyone who is scared or intimidated or angry should come there, rather than risk their well-being facing down the dregs of society.” To be sure, this approach may not be as satisfying as punching Nazis, but it may increase the odds that in the future, there will be fewer Nazis to punch in the first place.

But perhaps the best reason to try to respond peacefully, whenever possible, is simply that violence is unpredictable and never easily contained (not even in the short term – [just ask the] two journalists who got attacked [by anti-Nazi demonstrators during the 2017 Charlottesville protests]). The risk that white-supremacist groups could get more and more radicalized and militant needs to be taken seriously, because however scary it was to see what happened in Charlottesville [in 2017], things can get much, much worse. And if things do get worse, plenty of the victims will be people who never asked to take this fight to the streets. In most other situations, progressives understand – or claim to understand – the moral gravity of calling for violence. They shouldn’t let a scary but small group of deeply loathed bigots steer them off course.

If experience has shown anything, it’s that nonviolence is simply the smarter approach (not to mention the more ethical one). As Pinker writes:

By the standards of history, a striking feature of the late-20th-century Rights Revolutions is how little violence they employed or even provoked. King himself was a martyr of the civil rights movement, as were the handful of victims of segregationist terrorism. But the urban riots that we associate with the 1960s were not a part of the civil rights movement and erupted after most of its milestones were in place. The other revolutions had hardly any violence at all: there was the nonlethal Stonewall riot, some terrorism from the fringes of the animal rights movement, and that’s about it. Their entrepreneurs wrote books, gave speeches, held marches, lobbied legislators, and gathered signatures for plebiscites. They had only to nudge a populace that had become receptive to an ethic based on the rights of individuals and were increasingly repelled by violence in any form. Compare this record to that of earlier movements which ended despotism, slavery, and colonial empires only after bloodbaths that killed people by the hundreds of thousands or millions.

[…]

King immediately appreciated that Gandhi’s theory of nonviolent resistance was not a moralistic affirmation of love, as nonviolence had been in the teachings of Jesus. Instead it was a set of hardheaded tactics to prevail over an adversary by outwitting him rather than trying to annihilate him. A taboo on violence, King inferred, prevents a movement from being corrupted by thugs and firebrands who are drawn to adventure and mayhem. It preserves morale and focus among followers when the movement suffers early defeats. By removing any pretext for legitimate retaliation by the enemy, it stays on the positive side of the moral ledger in the eyes of third parties, while luring the enemy onto the negative side. For the same reason, it divides the enemy, paring away supporters who find it increasingly uncomfortable to identify themselves with one-sided violence. All the while it can press its agenda by making a nuisance of itself with sit-ins, strikes, and demonstrations. The tactic obviously won’t work with all enemies, but it can work with some.

Karuna Mantena cites Reinhold Niebuhr to further explain why nonviolence is so effective:

Niebuhr was a political realist, perhaps the most influential realist of the 20th century. As such, he argued that political conflict was rooted in struggles of and over power. At the same time, political contestation generates and is exacerbated by resentments and egoistic sentiments. People take criticism of their political beliefs and position very personally, as insults and unjust accusations. Movements that question privilege will be met by indignation, animosity and resistance. Moreover, insurgent movements, movements that question the status quo, suffer a triple burden. They have to fight a better-resourced and entrenched opposition. They face the impassioned hostility that the contestation of privilege provokes. Protestors are readily branded as criminals, anarchists and inciters of violence.

Disciplined nonviolence can disrupt these natural presumptions and dynamics. Niebuhr saw that nonviolent protests, like all protests, involve coercion, intimidation and disruption. Protest in the form of economic boycott or a nonviolent march or demonstration will understandably be resented by those against whom it is aimed. But more neutral bystanders whom the protest incidentally disturbs and inconveniences might also respond with hostility and misunderstanding. The Occupy movements, for example, generated criticism as public nuisances.

Successful movements try to mitigate these negative consequences through the style and structure of nonviolent protest enacted. By ‘enduring more suffering than it causes’ (in Niebuhr’s words) nonviolence demonstrates goodwill towards the opposition. Its discipline displays a moral purpose beyond resentment and selfish ambition. Together, goodwill and the repression of personal resentment temper the passionate resistance of opponents. Ideally, this tempering can help to weaken the opposition’s entrenched commitments. More often, it has a salutary effect on potential allies of the movement, the neutral observers and the public at large. When protestors adopt discipline in their comportment and dress, this negates portrayals of them as criminal elements or enemies of public order. The jeering opposition is now exposed as irrational and uncivil in their response to the civility of the protestors. Disciplined, temperate protestors can divert and reduce hostilities to help the public to see beyond the inflamed situation to the underlying dispute.

Gandhi and King’s nonviolence required the repression of resentment and anger to garner the right political effect. Neither of them denied anger was a justified response to the experience of oppression, but they saw that it would not be, in Niebuhr’s terms, ‘morally and politically wise’ to make resentment the face of political action. Resentment, anger and indignation arouse opponents’ egoism and hostility, and tend to alienate bystanders. This was why, for Niebuhr, ‘the more the egoistic element can be purged from resentment, the purer a vehicle of justice it becomes’.

A movement’s success, in short, doesn’t just depend on how passionate or outraged its participants are. It depends on how well they’re able to channel that passion and outrage into constructive action. And just to add to this, “constructive action” doesn’t just mean street demonstrations. Even if a movement’s participants conduct themselves perfectly, street demonstrations alone can’t be expected to carry that movement to victory unless there’s a broader plan of action to follow them up. This is another major reason why the civil rights movement was so successful – its participants weren’t just venting their frustrations through big, theatrical gestures and then going home. They actually organized, planned, and carried out concrete steps toward achieving specific policy goals. Heller explains:

Why did civil-rights protest work where recent activism struggles?

[…]

“Modern networked movements can scale up quickly and take care of all sorts of logistical tasks without building any substantial organizational capacity before the first protest or march,” [Zeynep Tufekci] writes. “However, with this speed comes weakness.”

Tufekci believes that digital-age protests are not simply faster, more responsive versions of their mid-century parents. They are fundamentally distinct. [At protests of the Erdoğan government] at Gezi Park [in Istanbul], she finds that nearly everything is accomplished by spontaneous tactical assemblies of random activists – the [L.A.] Kauffman model carried further through the ease of social media. “Preexisting organizations whether formal or informal played little role in the coordination,” she writes. “Instead, to take care of tasks, people hailed down volunteers in the park or called for them via hashtags on Twitter or WhatsApp messages.” She calls this style of off-the-cuff organizing “adhocracy.” Once, just getting people to show up required top-down coördination, but today anyone can gather crowds through tweets, and update, in seconds, thousands of strangers on the move.

At the same time, she finds, shifts in tactics are harder to arrange. Digital-age movements tend to be organizationally toothless, good at barking at power but bad at forcing ultimatums or chewing through complex negotiations. When the Gezi Park occupation intensified and the Turkish government expressed an interest in talking, it was unclear who, in the assembly of millions, could represent the protesters, and so the government selected its own negotiating partners. The protest diffused into disordered discussion groups, at which point riot police swarmed through to clear the park. The protests were over, they declared – and, by that time, they largely were.

The missing ingredients, Tufekci believes, are the structures and communication patterns that appear when a fixed group works together over time. That practice puts the oil in the well-oiled machine. It is what contemporary adhocracy appears to lack, and what projects such as the postwar civil-rights movement had in abundance. And it is why, she thinks, despite their limits in communication, these earlier protests often achieved more.

Tufekci describes weeks of careful planning behind the yearlong Montgomery bus boycott, in 1955. That spring, a black fifteen-year-old named Claudette Colvin refused to give up her seat on a bus and was arrested. Today, though, relatively few people have heard of Claudette Colvin. Why? Drawing on an account by Jo Ann Robinson, Tufekci tells of the Montgomery N.A.A.C.P.’s shrewd process of auditioning icons. “Each time after an arrest on the bus system, organizations in Montgomery discussed whether this was the case around which to launch a campaign,” she writes. “They decided to keep waiting until the right moment with the right person.” Eventually, they found their star: an upstanding, middle-aged movement stalwart who could withstand a barrage of media scrutiny. This was Rosa Parks.

On Thursday, December 1st, eight months after Colvin’s refusal to give up her seat, Parks was arrested. That night, Robinson, a professor at Alabama State College, typed a boycott announcement three times on a single sheet of paper and began running it through the school’s mimeograph machine, for distribution through a local network of black social organizations. The boycott, set to begin on Monday morning, was meant to last a single day. But so many joined that the organizers decided to extend it – which necessitated a three-hundred-and-twenty-five-vehicle carpool network to get busless protesters to work. Through such scrupulous engineering, the boycott continued for three hundred and eighty-one days. Parks became a focal point for national media coverage, while Colvin and four other women were made plaintiffs in Browder v. Gayle, the case that, rising to the Supreme Court, got bus segregation declared unconstitutional.

What is striking about the bus boycott is not so much its passion, which is easy to relate to, as its restraint, which – at this moment, especially – is not. No outraged Facebook posts spread the news when Colvin was arrested. Local organizers bided their time, slowly planning, structuring, and casting what amounted to a work of public theatre, and then built new structures as their plans changed. The protest was expressive in the most confected sense, a masterpiece of control and logistics. It was strategic, with the tactics following. And that made all the difference in the world.

Tufekci suggests that, among that era’s successes, deliberateness of this kind was a rule. She points out how, in preparation for the March on Washington, in 1963, a master plan extended even to the condiments on the sandwiches distributed to marchers. (They had no mayonnaise; organizers worried that the spread might spoil in the August heat.) And she focusses on the role of the activist leader Bayard Rustin, who was fixated on the audio equipment that would be used to amplify the day’s speeches. Rustin insisted on paying lavishly for an unusually high-quality setup. Making every word audible to all of the quarter-million marchers on the Mall, he was convinced, would elevate the event from mere protest to national drama. He was right.

Before the march, Martin Luther King, Jr., had delivered variations on his “I Have a Dream” speech twice in public. He had given a longer version to a group of two thousand people in North Carolina. And he had presented a second variation, earlier in the summer, before a vast crowd of a hundred thousand at a march in Detroit. The reason we remember only the Washington, D.C., version, Tufekci argues, has to do with the strategic vision and attentive detail work of people like Rustin. Framed by the Lincoln Memorial, amplified by a fancy sound system, delivered before a thousand-person press bay with good camera sight lines, King’s performance came across as something more than what it had been in Detroit – it was the announcement of a shift in national mood, the fulcrum of a movement’s story line and power. It became, in other words, the rarest of protest performances: the kind through which American history can change.

Tufekci’s conclusions about the civil-rights movement are unsettling because of what they imply. People such as Kauffman portray direct democracy as a scrappy, passionate enterprise: the underrepresented, the oppressed, and the dissatisfied get together and, strengthened by numbers, force change. Tufekci suggests that the movements that succeed are actually proto-institutional: highly organized; strategically flexible, due to sinewy management structures; and chummy with the sorts of people we now call élites.

The dry technical work of formulating policies, identifying the people whose views it’s most important to change (this part is crucial), and coordinating the necessary logistics to reach those people may not be as satisfying as spontaneously taking to the streets and busting some heads – at least not to some activists – but it is generally more effective. Wong puts it this way:

You need to ask yourself one crucial question: Are you in it for the cause, or are you in it for the fight? There’s an easy way to tell: Do you get involved with the boring parts?

Donald Trump’s entire agenda could be obliterated a little more than a year from now with a new congress, but statistically the vast majority of you won’t vote at all (and I’d say the vast majority who show up to anti-Nazi rallies also won’t cast a vote). Smacking Nazis with clubs is fun. Voting in midterms is not. Only one results in real change. Hell, in the 2016 election that supposedly determined the future of humanity “Did Not Vote” won 44 of 50 states. Why are some of you willing to put yourself in physical danger at a protest but won’t suffer the tedium of real-world policy change? Deep down inside, you know the answer.

“But voting doesn’t change anything!” Okay, the outcome of exactly one senate race just prevented Obamacare from being repealed. Twenty million people will have health insurance next year because just a small group of voters — enough to fit in a stadium — showed up instead of staying home. You think Hillary would be talking about repealing DACA? “Sometimes violence is the only way!” Are you saying that based on evidence, or because you want it to be true? For every nationalist/authoritarian movement that got turned back by war, literally thousands quietly died due to losing elections or just failing to drum up popular support. How many elections has David Duke won?

Like it or not, the battle of ideas isn’t a battle that can be won through brute force. (In fact, just to drive the point home, even actual wars – the kind with tanks and bullets – are becoming harder to win through brute force; rather than crushing their enemies through violence alone, modern military forces are increasingly being forced to wage battles for “hearts and minds” instead, lest they radicalize populations against them.) If you want people to take your ideas seriously, then smashing windows and picking fights isn’t how you convince them to join you in your cause. You have to engage constructively.

And this doesn’t just mean rejecting physical violence, by the way – it means rejecting rhetorical belligerence too. Getting in your opponents’ faces and spouting off buzzword insults is a great way to discredit yourself in their eyes (and in the eyes of neutral observers) and sabotage any possibility of constructive engagement – so if that’s your goal, then by all means mock your opponents and misrepresent their views all you want. As David Christopher Bell writes:

Hey conservatives, want to make liberals stop taking you seriously? Call them a “cuck” or “libtard” or “snowflake.” Heeeey liberals, want to make conservatives stop taking you seriously? Call them a “basement dweller” or a “fascist” or a “deplorable.” Call the president “Cheeto-face” or “Drumpf.” It’s a surefire way to make your opponent not only unaffected by your words, but also think of you as being another cultist reading off a tired script of go-to phrases and insults.

But if you actually want to bring people around to your way of thinking, you need to demonstrate that your ideology is a thoughtful and well-reasoned one. A lot of activists seem to have this idea that the best way to get their opponents to stop ignoring them and start taking them seriously is to just be louder and more aggressive and more in-their-face. But of course, as Friedersdorf notes, this is in fact the best way to make people want to keep ignoring you and rejecting your ideas:

People are never less likely to change, to convert to new ways of thinking or acting, than when it means joining the ranks of their denouncers.

It’s understandable that tempers might be running high and patience might be running low given how contentious a lot of these issues are, and especially given how much is often at stake. If what the other side is doing is especially egregious, you might find it almost impossible to treat them with anything other than hatred or contempt. These are matters of life and death hanging in the balance, after all – can you really be expected to be as polite and civil as if this were just some academic debate club? But the key point here is that it’s possible to be both completely outraged about something and strategically judicious in your approach to resolving it – and it’s precisely because the ideas you’re arguing for are so important that it’s so crucial to promote them in a way that will actually change people’s minds rather than polarizing them against you. The most successful political dissidents in history have embodied precisely this approach; when Martin Luther King stood up in front of the entire country to give his legendary speech, he didn’t just start spewing vitriol into the microphone (and you can imagine how well his arguments would have been received if he had just gotten up there and started screaming “HEY MR. SO-CALLED PRESIDENT FUCK YOU FOR NOT ACKNOWLEDGING OUR RIGHTS YOU FASCIST PIECE OF SHIT,” etc.). Rather, when the spotlight was on him, he realized the importance of making his case – and by doing so in a way that was serious and compelling, he turned the nation to his side and won the day.

Here’s T1J on the subject:

For some people, politics seems to be a game of good versus evil. So the concept of common ground with political opponents seems unthinkable. And some people are astonished by the mere suggestion that they should have to ever defend their positions.

[…]

[These people] often have this smug incredulity where it’s like, “You should already know better, I shouldn’t have to explain it to you” – and I’ve even been guilty of that, and maybe it’s even true sometimes. But it’s damn sure not convincing. Neither are insults or social media callouts. But you know what is? Arguments. If you’re good at making them.

Unfortunately, not everyone has the patience for this kind of intellectual maneuvering. In some activist circles, even raising the possibility that reasoned dialogue might be more productive than angry outbursts can get you accused of “tone policing.” (If you’re not familiar with this term, the strongest case for it I’ve seen is this comic strip.) But pointing out that angrily shouting at people is less likely to get a positive response than calmly talking with them isn’t the same as (say) a police officer giving someone a ticket for speeding; it’s more like a friend warning them that driving too fast can attract speeding tickets. The latter isn’t an act of “policing;” it’s a helpful heads-up about which actions will lead to which consequences. That is, it isn’t a prescriptive statement – saying that you have to conduct yourself in a certain way – it’s a descriptive one – saying that if you conduct yourself in a certain way, then it’ll be more likely to get a negative response than if you conduct yourself in another way. Accordingly, if your goal in expressing your opinions is just to let off steam or commiserate with people who feel the same way you do (even if it means potentially alienating people who don’t belong to your in-group), then by all means vent your frustrations freely and don’t worry about how your tone might be received. But if your goal is to actually persuade the people who don’t already agree with you (as it seems like it should be if you really believe in your ideas), then it’s just an unavoidable fact that blasting them with outrage will probably do more to undermine your cause than help it. This doesn’t mean that you should completely ignore the question of whether an issue affects you emotionally; if it’s central to the issue being discussed, then of course you should include that fact in your discussions. But trying to change people’s minds with emotional outbursts as a substitute for well-reasoned arguments is a recipe for failure. Sure, strategic self-restraint can be hard; you may have to grit your teeth and force yourself to maintain composure even if you’re seething inside. But again, it’s just a matter of what your priorities are – are you more interested in being able to cathartically vent your outrage and signal that outrage to your peers, or are you more interested in making legitimate progress on this issue you feel so strongly about? To put it more bluntly, is it about you, or is it about the cause?

It’s hard to take someone seriously when they appear to prioritize their own outrage above the cause they’re ostensibly outraged about. From your critics’ perspective, it proves that your activism is simply a pretense – that you’d rather continue being a victim (so you can keep being outraged about it) than produce real change in the world so that you’re not a victim anymore (and don’t have to keep feeling outraged). It gives the impression that you’re not really a mature adult committed to helping your cause by whatever means will be most effective (even if it means, God forbid, strategically keeping your own personal emotions in check at times), but rather that you’re just a spoiled child whose only concern is being able to give the world a piece of your mind whenever you want, regardless of the consequences. This may be an unfair judgment, sure – but when you persistently refuse to “calm down and have an adult conversation,” it’s the impression you give, whether you want to or not.

The alternative approach – making a conscious effort to be reasonable – doesn’t mean that you have to water down your beliefs or compromise your convictions just for the sake of “playing nice.” It’s perfectly possible to advocate even the most radical beliefs while still maintaining your poise and dignity. Choosing to remain civil just means recognizing that there are better ways of winning people to your beliefs than simply demanding that they do so. The way to get respect is by displaying respectability.

And for what it’s worth, there are also good reasons to choose civil discourse over combative rhetoric just in terms of your own personal self-interest. Constantly maintaining a hostile, trip-wire mentality – constantly reacting to things and finding new reasons to get outraged – can be exhausting. It can lead to perpetual bitterness and, in time, to total burnout. If you’re able to maintain your composure, on the other hand, it can do a lot more to preserve your mental sharpness and ensure your long-term durability. So even if all you care about is your own peace of mind, keeping an even keel is usually the best way to achieve that.

As Megan McArdle adds, it’s also important to maintain civility simply in order to coexist with the other parts of your society which are going to continue to exist whether you like it or not. A lot of activists seem intent on treating life as if it were a video game, in which you could defeat your enemies in a way that would cause them to simply disappear – but as she explains in a podcast interview, the reality is the exact opposite:

I wrote a column about the fact that America is like a marriage. It’s like a marriage in a country with no divorce. You cannot win a marriage. You can only win something that ends before you do. And so, you can’t just beat the other 50% of the population. They’re here. You’ve got to figure out a way to live with them. And if we want, we can have a bad marriage. There were lots of them around before divorce was legal; there are still some around now. We can have a terrible marriage where we scream at each other and we’re bitter, we say nasty things to each other all the time. But you don’t win that. You lose that. Because now you’re in a miserable marriage. And the other person has just as much power to hurt you as you have to hurt them. And that is, I think, in a lot of ways the lesson of Trump – and you can also say the lesson of gay marriage, where social conservatives turned around and said, “Why is everyone beating me up?” and it’s like, “Well, these people felt like you were beating them up for a long time, that’s why.”

We have to recognize that the other population is not going away. And that if you want to live with them without them constantly hurting you, you have to not look to constantly hurt them.

Eboo Patel agrees:

When did they teach you that diversity is […] just the differences you like? It’s not all samosas and egg rolls. Diversity is about disagreements. There’s a great line: “Diversity is not rocket science; it’s harder.” Because if you’re engaging people with whom you have differences that you don’t like, [with whom] you have disagreements, you’ve got to figure out how you’re going to engage those people. Does the fact of that disagreement – voting differently in a particular election, disagreeing on fundamental issues, immigration policy for example, abortion – does that disagreement cancel any chance of a relationship? If it does, we don’t have a civil society anymore. How do you have PTAs, or a little league, or hospitals? That’s what a diverse civil society is about: the ability to disagree on some fundamental things, and still work together on other fundamental things. That doesn’t mean that you bracket your disagreement forever. Part of the beauty of working together on other fundamental things is the ability to build a relationship on something that matters, such that you might be able to broach that disagreement with a different tone. But if we allow some disagreements to cancel any possibility of a relationship, we’re in real trouble as a society.

Now of course there are limits. I am happy to engage just about everybody in the United States of America in a conversation, or to be part of an athletic league with them, or to be on the PTA with them, but I’m not buying a brownie from the KKK bake sale. There are limits. But I think that in a diverse civil society, when we recognize that diversity is not just the differences we like, those limits are not “the person who voted differently from you in the last election;” those limits are “the true barbarians.” And the way the great political philosopher defined the term “barbarian” is: the barbarian is the person who destroys the conversation. Civilization means people from different backgrounds living together and talking together. The barbarian is the person who destroys the conversation. I think that person is beyond the circle of civil discourse; anybody else, I’m engaging with.

Aside from these purely pragmatic and self-interested reasons, though, Patel’s last point also raises the most important reason of all to treat your ideological opponents with civility – which is simply that, believe it or not, most of them aren’t actually the over-the-top caricatures of evil that your side so often wants to paint them as. Your opponents might dispute some of your ideas – their own ideas might even be horribly, disastrously wrong – but that doesn’t automatically mean that each and every one of them is therefore some kind of fiendish mustache-twirling villain whose sole terminal value is to inflict maximum suffering on others and make the world a worse place. In 99.9% of cases, people genuinely believe that they’re the “good guys” acting in the cause of righteousness. And the fact that they may be misguided in this belief, or that they may have been exposed to inaccurate information that has led them to draw the wrong conclusions, doesn’t mean that they’re stupid or evil. It just means that they’re wrong. As Peter Boghossian and James Lindsay write:

Differences of opinion, even moral opinions, are not necessarily moral failings. People hold moral beliefs for a range of reasons, from culture to personal experience to ignorance. If someone reasons her way to a false moral view, this doesn’t make her a bad person. It just means her reasoning was in error.

People can get things wrong sometimes – even horribly, disastrously wrong – while still believing that their ideas and actions are based on well-founded principles and that they’re on the right side of the moral argument. And if you just say they’re stupid or evil and that’s the end of your explanation, then as Steven Zuber puts it, you’re probably not thinking hard enough.

Yudkowsky talks about this in depth:

Are your enemies innately evil?

[…]

As previously discussed, we see far too direct a correspondence between others’ actions and their inherent dispositions. We see unusual dispositions that exactly match the unusual behavior, rather than asking after real situations or imagined situations that could explain the behavior. We hypothesize mutants.

When someone actually offends us – commits an action of which we (rightly or wrongly) disapprove – then, I observe, the correspondence bias redoubles. There seems to be a very strong tendency to blame evil deeds on the Enemy’s mutant, evil disposition. Not as a moral point, but as a strict question of prior probability, we should ask what the Enemy might believe about their situation which would reduce the seeming bizarrity of their behavior. This would allow us to hypothesize a less exceptional disposition, and thereby shoulder a lesser burden of improbability.

On September 11th, 2001, nineteen Muslim males hijacked four jet airliners in a deliberately suicidal effort to hurt the United States of America. Now why do you suppose they might have done that? Because they saw the USA as a beacon of freedom to the world, but were born with a mutant disposition that made them hate freedom?

Realistically, most people don’t construct their life stories with themselves as the villains. Everyone is the hero of their own story. The Enemy’s story, as seen by the Enemy, is not going to make the Enemy look bad. If you try to construe motivations that would make the Enemy look bad, you’ll end up flat wrong about what actually goes on in the Enemy’s mind.

But politics is the mind-killer. Debate is war; arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the opposing side; otherwise it’s like stabbing your soldiers in the back.

If the Enemy did have an evil disposition, that would be an argument in favor of your side. And any argument that favors your side must be supported, no matter how silly – otherwise you’re letting up the pressure somewhere on the battlefront. Everyone strives to outshine their neighbor in patriotic denunciation, and no one dares to contradict. Soon the Enemy has horns, bat wings, flaming breath, and fangs that drip corrosive venom. If you deny any aspect of this on merely factual grounds, you are arguing the Enemy’s side; you are a traitor. Very few people will understand that you aren’t defending the Enemy, just defending the truth.

If it took a mutant to do monstrous things, the history of the human species would look very different. Mutants would be rare.

Or maybe the fear is that understanding will lead to forgiveness. It’s easier to shoot down evil mutants. It is a more inspiring battle cry to scream, “Die, vicious scum!” instead of “Die, people who could have been just like me but grew up in a different environment!” You might feel guilty killing people who weren’t pure darkness.

This looks to me like the deep-seated yearning for a one-sided policy debate in which the best policy has no drawbacks. If an army is crossing the border or a lunatic is coming at you with a knife, the policy alternatives are (a) defend yourself (b) lie down and die. If you defend yourself, you may have to kill. If you kill someone who could, in another world, have been your friend, that is a tragedy. And it is a tragedy. The other option, lying down and dying, is also a tragedy. Why must there be a non-tragic option? Who says that the best policy available must have no downside? If someone has to die, it may as well be the initiator of force, to discourage future violence and thereby minimize the total sum of death.

If the Enemy has an average disposition, and is acting from beliefs about their situation that would make violence a typically human response, then that doesn’t mean their beliefs are factually accurate. It doesn’t mean they’re justified. It means you’ll have to shoot down someone who is the hero of their own story, and in their novel the protagonist will die on page 80. That is a tragedy, but it is better than the alternative tragedy. It is the choice that every police officer [is tasked with potentially having to make], every day, to keep our neat little worlds from dissolving into chaos.

When you accurately estimate the Enemy’s psychology – when you know what is really in the Enemy’s mind – that knowledge won’t feel like landing a delicious punch on the opposing side. It won’t give you a warm feeling of righteous indignation. It won’t make you feel good about yourself. If your estimate makes you feel unbearably sad, you may be seeing the world as it really is. More rarely, an accurate estimate may send shivers of serious horror down your spine, as when dealing with true psychopaths, or neurologically intact people with beliefs that have utterly destroyed their sanity (Scientologists or Jesus Camp).

So let’s come right out and say it – the 9/11 hijackers weren’t evil mutants. They did not hate freedom. They, too, were the heroes of their own stories, and they died for what they believed was right – truth, justice, and the Islamic way. If the hijackers saw themselves that way, it doesn’t mean their beliefs were true. If the hijackers saw themselves that way, it doesn’t mean that we have to agree that what they did was justified. If the hijackers saw themselves that way, it doesn’t mean that the passengers of United Flight 93 should have stood aside and let it happen. It does mean that in another world, if they had been raised in a different environment, those hijackers might have been police officers. And that is indeed a tragedy. Welcome to Earth.

The fact is, even the most egregious wrongdoers have reasons for believing what they believe. These reasons may not always be very good ones, but the fact that they exist in the first place means that there’s something there that can be addressed and potentially refuted. People’s minds can be changed if you use the right approach – but that’s only possible if you acknowledge that your opponents really do have beliefs that they sincerely think are right, and that they hold these beliefs honestly and are therefore capable of changing them. If your criticism is focused on these ideas, not on the people themselves – if your aim is to correct bad ideas rather than to punish the people who hold them – then you actually can move the needle, even if only little by little. But if you insist on treating your opponents as pure mindless forces of evil, which can never be persuaded to change their minds and therefore can only be destroyed, then by definition you’re making it impossible for the dispute between you to ever reach a peaceful resolution. As Wong puts it in another podcast discussion:

Even if it is a snarling person at a rally screaming racist slogans – that person’s got a story; they got to that place in their life somehow. People don’t pop out of the womb with a swastika on their neck; they make a series of choices to get there. And we know from history that there are factors in society that create these movements, and you can do things to combat them – but you have to acknowledge that it’s a real thing; you can’t just be dismissive of it, and just be very snarky and sarcastic and just endlessly make fun of these people as being ignorant or hateful or whatever, because we now have evidence [that] that’s not a winning technique.

[…]

[Even the most radical extremists] can be convinced [to change their views]. But they are not shamed out of their positions; that simply doesn’t work. And what the Daily Show does, and what John Oliver does, and what now Samantha Bee does, and many other outlets on YouTube or whatever, where you just mock those people and call them monsters and you roll your eyes, and it’s like “How could anybody possibly believe these things or support these things?” – that doesn’t work. That doesn’t bring them around.

If you’re actually interested in trying to turn your enemies into allies – and not just denouncing them to show how passionate and committed you are to your cause – then you’ve got to do more than just demand that they join your side and then mock them if they don’t. You’ve got to actually bring good arguments to the table and lay those arguments out in a persuasive way. As David Pizarro puts it:

If you are right [in your beliefs], then there ought to be reasons you are right. [You should] register those reasons, and make them known, [but just] shouting down [your opponents] really misses the whole point of being able to exercise your ability to have free thought. [When] there’s no well-reasoned arguments [and instead] you have protest by shouting, then I feel like you lose.

To return to our point from before: In the same way that violence is a symmetric weapon – i.e. one that helps the more powerful side to win regardless of who’s actually right – so too are tactics like trying to silence or punish your opponents rather than persuading them. As Alexander writes:

A good response to an argument is one that addresses an idea; a bad response is one that silences it. If you try to address an idea, your success depends on how good the idea is; if you try to silence it, your success depends on how powerful you are and how many pitchforks and torches you can provide on short notice.

Shooting bullets is a good way to silence an idea without addressing it. So is firing stones from catapults, or slicing people open with swords, or gathering a pitchfork-wielding mob.

But trying to get someone fired for holding an idea is also a way of silencing an idea without addressing it.

[…]

A lot of people would argue that [using tactics like this] holds people “accountable” for what they say online. But like most methods of silencing speech, its ability to punish people for saying the wrong things is entirely uncorrelated with whether the thing they said is actually wrong. It distributes power based on who controls the largest mob (hint: popular people) and who has the resources, job security, and physical security necessary to outlast a personal attack (hint: rich people). If you try to hold the Koch Brothers “accountable” for muddying the climate change waters, they will laugh in your face. If you try to hold closeted gay people “accountable” for promoting gay rights, it will be very easy and you will successfully ruin their lives. Do you really want to promote a policy that works this way?

He continues:

People […] have to understand that the correct response to “idea I disagree with” is “counterargument”, not “find some way to punish or financially ruin the person who expresses it.” If you respond with counterargument, then there’s a debate and eventually the people with better ideas win (as is very clearly happening right now with gay marriage). If there’s a norm of trying to punish the people with opposing views, then it doesn’t really matter whether you’re doing it with threats of political oppression, of financial ruin, or of social ostracism, the end result is the same – the group with the most money and popularity wins, any disagreeing ideas never get expressed.

If you want your opponents to stop ignoring your ideas and start taking them seriously, then, you shouldn’t just keep demanding that they do so at ever-increasing volume. You should try actually sitting down with them and having a conversation. This is the kind of approach that people respond to – not in every case, of course, but certainly more often than the “screaming in their face” approach. John Cheese gives the example of his own experience:

I used to be a steadfast conservative. I’m talking the “abortion is wrong and gay people are a plague” type. I’m not that way anymore, but I didn’t get there on my own. I talked to people who didn’t blow me off as a raging lunatic. They didn’t scream at me or throw handfuls of shit. They explained why they thought I was wrong, and guess what? Over time, I found that I actually agreed with them. If I had been met with nothing but hatred and insults, my only reaction would have been “Go fuck half of yourself. Then take a break. Then come back and fuck the other half of yourself.”

Stories like this aren’t particularly uncommon. Even the most radical extremists – let’s take the example of Klansmen again – can be reasoned out of their positions with the right combination of patience, reason, and understanding. Carl Sagan recounts one such instance:

When permitted to listen to alternative opinions and engage in substantive debate, people have been known to change their minds. It can happen. For example, Hugo Black, in his youth, was a member of the Ku Klux Klan; he later became a Supreme Court justice and was one of the leaders in the historic Supreme Court decisions, partly based on the 14th Amendment to the Constitution, that affirmed the civil rights of all Americans: It was said that when he was a young man, he dressed up in white robes and scared black folks; when he got older, he dressed up in black robes and scared white folks.

There’s actually a whole genre of stories like this, where Klansmen or neo-Nazis eventually change their views and become advocates for equal rights. Reading through their accounts, you see the same basic story over and over again – and it’s never “Well, I was confronted by an angry mob that called me a fascist piece of shit and screamed slogans at me until I said, gosh, you know, I am in fact a fascist piece of shit and had better adopt these people’s views instead because they really seem like they’ve got a lot of wisdom to share.” Rather, the way these success stories always go is that the Klansman or neo-Nazi sits down and has an earnest, respectful discussion with someone whose views differ from their own, and it gets them thinking about and reexamining some of the assumptions they’d always taken for granted. They start to realize that – lo and behold – some members of their out-group actually might be capable of reason and intelligence and kindness after all, and they aren’t just a bunch of crazed animals. In fact, the more they interact with people from their out-group, the more they start to realize that maybe these people are actually a lot more normal and decent than they’d thought. They gradually become a little more receptive to the out-group’s perspective each time they sit down with them from then on; and slowly their own views shift until they ultimately come into a more harmonious alignment with the other side.

One of the masters of shifting people’s views in this way is Daryl Davis, a black musician who has personally convinced dozens of people to leave the KKK by befriending them and engaging them in patient, civil conversations. He explains his approach in a Q&A:

People make the mistake of forming anti-racist groups that are rendered ineffective from the start because [they] ONLY invite those who share their beliefs to their meetings. [Here’s a better approach:]

  • Provide a safe neutral meeting place.
  • Learn as much as you can about the ideology of a racist or perceived racist in your area.
  • Invite that person to meet with your group.
  • VERY IMPORTANT — LISTEN to that person. What is his/her primary concern? Place yourself in their shoes. What would you do to address their concern if it were you?
  • [Ask] questions, but keep calm in the face of their loud, boisterous posture if that is on display; don’t combat it with the same.
  • While you are actively learning about someone else, realize that you are passively teaching them about yourself. Be honest and respectful to them, regardless of how offensive you may find them. You can let them know your disagreement but not in an offensive manner.
  • Don’t be afraid to invite someone with a different opinion to your table. If everyone in your group agrees with one another and you shun those who don’t agree, how will anything ever change? You are doing nothing more than preaching to the choir.
  • When two enemies are talking, they are not fighting, they are talking. They may be yelling and screaming and pounding their fist on the table in disagreement to drive home their point, but at least they are talking. It is when the talking ceases, that the ground becomes fertile for violence. So, KEEP THE CONVERSATION GOING.

To those who refuse to engage with their opponents’ worldview on the basis that “their ideology is one that can’t be reasoned with,” Davis’s example provides a good counterargument. Even if your opponents seem too extreme and absolutist to be reasoned with, it may still be possible to influence them; you just need to step up your game.

His advice to “keep the conversation going” also illustrates another important point – that even when your conversations do succeed in having an influence on someone (whether it be an extremist like a Klansman or just a more everyday person who happens to hold differing political or religious views), you can’t expect that it will always be a matter of winning them over on the spot and instantly converting them to your worldview. As Beck writes:

When someone does change their mind, it will probably be more like the slow creep of [Daniel] Shaw’s disillusionment with his [meditation] guru [who, after years of spiritual lessons, started to seem less and less impressive over time]. He left “the way most people do: Sort of like death by a thousand cuts.”

Hardly anyone ever changes their mind in real time, in the middle of a debate. You shouldn’t ever expect to see someone suddenly stop mid-sentence and say, “You know what? I just realized that your argument is totally airtight. My worldview is completely wrong!” Instead, the process of reexamining arguments, reconsidering beliefs, and adjusting worldviews usually plays out over the course of weeks, or months, or even years. That’s why, if you find yourself in the middle of a debate, your focus shouldn’t necessarily be on trying to “hit the home run ball” and force your opponent to change their entire worldview on the spot. You’ll probably have more luck if you just gently plant the seeds of your ideas in the other person’s mind and let them germinate there for a while. As commenter Gneissisnice puts it:

An argument isn’t about changing someone’s mind. You’re almost never going to get someone to admit that they’re wrong. [Your] goal is [to] make them understand your position. You want them to say “Huh, I see where you’re coming from. I disagree, but I understand why you think that.”

In most debates, the most productive approach isn’t even to have each side try to “win” at all – at least, not at first. The best place to start is just to dig down into exactly what each side believes and why – asking clarifying questions along the way and trying to build bridges of understanding wherever possible – in order to get to the root of where the disagreements lie and what the fundamental points of contention really are. This is the most effective way of avoiding the classic debate fiasco where both sides end up talking past one another and arguing completely separate points. But it also creates openings for each side to take on board new information that they might not have considered before. If neither side feels required to win the argument, both might be more receptive to opposing ideas. Alexander talks about how this gradual “chipping away” process works, as well as some of the other benefits of constructive dialogue:

What’s the point? If you’re just going to end up at the high-level generators of disagreement, why do all the work?

First, because if you do it right you’ll end up respecting the other person. Going through all the motions might not produce agreement, but it should produce the feeling that the other person came to their belief honestly, isn’t just stupid and evil, and can be reasoned with on other subjects. The natural tendency is to assume that people on the other side just don’t know (or deliberately avoid knowing) the facts, or are using weird perverse rules of reasoning to ensure they get the conclusions they want. Go through the whole process, and you will find some ignorance, and you will find some bias, but they’ll probably be on both sides, and the exact way they work might surprise you.

Second, because – and this is total conjecture – this deals a tiny bit of damage to the high-level generators of disagreement. I think of these as Bayesian priors; you’ve looked at a hundred cases, all of them have been X, so when you see something that looks like not-X, you can assume you’re wrong – see the example […] where the libertarian admits there is no clear argument against this particular regulation, but is wary enough of regulations to suspect there’s something they’re missing. But in this kind of math, the prior shifts the perception of the evidence, but the evidence also shifts the perception of the prior.

Imagine that, throughout your life, you’ve learned that UFO stories are fakes and hoaxes. Some friend of yours sees a UFO, and you assume (based on your priors) that it’s probably fake. They try to convince you. They show you the spot in their backyard where it landed and singed the grass. They show you the mysterious metal object they took as a souvenir. It seems plausible, but you still have too much of a prior on UFOs being fake, and so you assume they made it up.

Now imagine another friend has the same experience, and also shows you good evidence. And you hear about someone the next town over who says the same thing. After ten or twenty of these, maybe you start wondering if there’s something to all of these UFO stories. Your overall skepticism of UFOs has made you dismiss each particular story, but each story has also dealt a little damage to your overall skepticism.

I think the high-level generators might work the same way. The libertarian says “Everything I’ve learned thus far makes me think government regulations fail.” You demonstrate what looks like a successful government regulation. The libertarian doubts, but also becomes slightly more receptive to the possibility of those regulations occasionally being useful. Do this a hundred times, and they might be more willing to accept regulations in general.

As the old saying goes, “First they ignore you, then they laugh at you, then they fight you, then they fight you half-heartedly, then they’re neutral, then they grudgingly say you might have a point even though you’re annoying, then they say on balance you’re mostly right although you ignore some of the most important facets of the issue, then you win.”
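To make the Bayesian framing in Alexander’s quote concrete, here’s a minimal sketch of that updating process in Python (my own illustration, not Alexander’s – the Beta(1, 99) prior and the observation counts are hypothetical numbers chosen purely for the example). Each individual piece of evidence barely dents a strong prior, but the dents accumulate:

    # A toy Beta-Binomial model of the "chipping away" dynamic: one observation
    # barely moves a strong prior, but a hundred of them move it a great deal.
    # (All numbers are hypothetical, chosen only for illustration.)

    def update(alpha, beta, successes=0, failures=0):
        """Conjugate Beta update: add observed counts to the prior's pseudo-counts."""
        return alpha + successes, beta + failures

    # A skeptic's prior on "a given regulation succeeds": Beta(1, 99), mean 1%.
    alpha, beta = 1.0, 99.0
    for i in range(1, 101):  # show the skeptic 100 apparently successful regulations
        alpha, beta = update(alpha, beta, successes=1)
        if i in (1, 10, 100):
            print(f"after {i:3d} examples: P(success) = {alpha / (alpha + beta):.1%}")

    # Output:
    # after   1 examples: P(success) = 2.0%
    # after  10 examples: P(success) = 10.0%
    # after 100 examples: P(success) = 50.5%

No single example flips the skeptic’s belief – after the first one, they still put the odds at about two percent – but the hundred together drag the estimate to even odds. The evidence shifts the prior just as the prior shifts the perception of the evidence.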

Another important finding from recent psychology is that, in most casual debates, the best way to make someone more receptive to opposing views isn’t to pressure them with counterarguments at all – it’s often more effective just to ask the right questions and let them discover the weaknesses in their own arguments for themselves. As McRaney explains:

Research by psychologist Steven Sloman and marketing expert Phil Fernbach shows that people who claim to understand complicated political topics such as cap and trade and flat taxes tend to reveal their ignorance when asked to provide a detailed explanation without the aid of Google. Though people on either side of an issue may believe they know their opponents’ positions, when put to the task of breaking it down they soon learn that they have only a basic understanding of the topic being argued. Stranger still, once subjects in such studies recognize this, they reliably become more moderate in their beliefs. Zealotry wanes; fanatical opposition is dampened. The research suggests simply working to better explain your own opinion saps your fervor. Yet the same research shows the opposite effect when subjects are asked to justify their positions on a contentious issue. Justification strengthens a worldview, but exploration weakens it.

Boghossian and Lindsay break down how this phenomenon works:

A philosopher and a psychologist, Robert A. Wilson and Frank Keil, have researched the phenomenon of ignorance of one’s ignorance. In a 1998 paper titled “The Shadows and Shallows of Explanation,” they studied the well-known phenomenon of people who believe they understand how things work better than they actually do. They discovered our tendency to believe we’re more knowledgeable than we are because we believe in other people’s expertise. Think about this like borrowing books from the great library of human knowledge and then never reading the books. We think we possess the information in the books because we have access to them, but we don’t have the knowledge because we’ve never read the books, much less studied them in depth. Following this analogy, we’ll call this fallacy the “Unread Library Effect.”

The Unread Library Effect was revealed in an experiment by two researchers in 2001, Frank Keil (again) and Leonid Rozenblit; they called it “the illusion of explanatory depth” and referred to it as “the misunderstood limits of folk science.” They researched people’s understanding of the inner workings of toilets. Subjects were asked to numerically rate how confident they were in their explanation of how a toilet works. The subjects were then asked to explain verbally how a toilet works, giving as much detail as possible. After attempting an explanation, they were asked to numerically rate their confidence again. This time, however, they admitted being far less confident. They realized their own reliance on borrowed knowledge and thus their own ignorance.

In 2013, cognitive scientists Steven Sloman and Philip Fernbach, with behavioral scientist Todd Rogers and cognitive psychologist Craig Fox, performed an experiment showing that the Unread Library Effect also applies to political beliefs. That is, helping people understand they’re relying upon borrowed knowledge leads them to introduce doubt for themselves and thus has a moderating effect on people’s beliefs. By having participants explain policies in as much detail as possible, along with how those policies would be implemented and what impacts they might have, the researchers successfully nudged strong political views toward moderation. Taking advantage of this phenomenon, then, confers at least two significant benefits in an intervention. First, it allows your conversational partner to do most of the talking, which affords you the opportunity to listen and prevents them from feeling as though you’re trying to change their mind. Second, they lead themselves into doubt rather than feeling pressured by someone else.

Modeling ignorance is an effective way to help expose the Unread Library Effect because, as the name implies, the Unread Library Effect relies upon information about which your conversation partner is ignorant – even though she doesn’t realize it. In essence, you want her to recognize the limits of her knowledge. Specifically, then, you should model behavior highlighting the limits of your own knowledge. This has three significant merits:

  1. It creates an opportunity for you to overcome the Unread Library Effect, that is, thinking you know more about an issue than you do.
  2. It contributes to a climate of making it okay to say “I don’t know,” and thus gives tacit permission to your partner to admit that she doesn’t know.
  3. It’s a subtle but effective strategy for exposing the gap between your conversation partner’s perceived knowledge and her actual knowledge.

Here are some examples of how you can apply this in conversations. You can say, “I don’t know how the details of using mass deportations of illegal immigrants would play out. I think there are likely both pros and cons, and I really don’t know which outweigh which. How would that policy be implemented? Who pays for it? How much would it cost? What does that look like in practice? Again, I don’t know enough specifics to have a strong opinion, but I’m happy to listen to the details.” When you do this, don’t be shy. Explicitly invite explanations, ask for specifics, follow up with pointed questions that revolve around soliciting how someone knows the details, and continue to openly admit your own ignorance. In many conversations, the more ignorance you admit, the more readily your partner in the conversation will step in with an explanation to help you understand. And the more they attempt to explain, the more likely they are to realize the limits of their own knowledge.

In this example, if your partner is an expert in this aspect of immigration policy, you might be rewarded with a good lesson. Otherwise, you might lead her to expose the Unread Library Effect because you started by modeling ignorance. Should your conversation partner begin to question her expertise and discover the Unread Library Effect, let its effects percolate. Do not continue to pepper her with questions.

It’s worth repeating that this strategy not only helps moderate strong views, it models openness, willingness to admit ignorance, and readiness to revise beliefs. Modeling intellectually honest ignorance is a virtue that seasoned conversation partners possess – and it is fairly easy to achieve.

Commenter Ayy_2_Brute recalls how effective this approach can be from personal experience:

My father had this habit that my mother’s constantly reminded me of since he died. No matter how well versed he was on a subject, he always let the other person do the talking first, then would ask them questions. Not only was he able to learn more, but it naturally made him entertain opinions contrary to his own, making him a much more open and humble person. If the person didn’t know what they were talking about, the discussion didn’t descend into an argument, they’d just slowly realize they’re talking to someone much more informed based solely on the questions he was asking. Which in turn made them a lot more open to receiving an alternate opinion as well.

If you’re in a debate with someone and you just sit back and let them explain their reasoning in as much detail as possible, without interrupting or challenging them until they’re finished, in many cases your attentive silence can make them even more self-conscious about what they’re saying than any counterargument ever could. They may start second-guessing whether their justifications really sound as strong out loud as they seemed to be in their heads – and the fact that you aren’t trying to force them into this self-questioning can make them feel more willing to admit it to themselves without feeling like they’re conceding anything to you or making themselves look foolish by losing an argument.

That, by the way, is another crucial point when it comes to changing someone’s mind – you’re a lot more likely to be successful in influencing your opponent’s thinking if you can adopt a gracious, unassuming attitude which allows them to admit where their arguments are weak (and maybe even make concessions on those weak points) without feeling like they’re losing face, or like you’re going to taunt and humiliate them for being wrong. Sun Tzu famously advised in The Art of War: “When you surround the enemy, always allow them an escape route. They must see that there is an alternative to death.” And this is good advice for ideological debates as well. If your opponent doesn’t see any viable “outs” – and they know that defeat will lead to nothing but humiliation and damage to their side’s cause – then they’ll naturally fight even more ferociously against you, and inflict even heavier costs on your side, than if you had just left them a line of retreat. Giving them a way out of their position, and making sure it’s one that allows them to save face, enables them to come around to your side without feeling like they’ve “lost.” In some cases, it may even prompt them to adopt your ideas more strongly than they might otherwise, in an effort to demonstrate that they actually agreed with you all along and weren’t really conceding anything in the first place. This is a win-win for everyone involved.

In short, then, the fundamental key to winning someone over is to make it as easy as possible for them to agree with you – and the way to do this is by removing the mental obstacles that might cause them to subconsciously resist. If you go into a conversation fully aware of your opponent’s desire to avoid embarrassment, you can try to lower the costs of admitting they’re wrong. Or similarly, if you recognize that they won’t want to give any satisfaction to someone who rubs them the wrong way, you can try to be the type of person they’ll want to get along with and reach a common understanding with. Commenter namethatisntaken probably speaks for all of us on this point:

My ability to admit I was wrong is largely determined by the attitude of the person I’m arguing with.

And Haidt elaborates:

The mind is divided, like a rider on an elephant, and the rider’s job is to serve the elephant. The rider is our conscious reasoning – the stream of words and images that hogs the stage of our awareness. The elephant is the other 99 percent of mental processes – the ones that occur outside of awareness but that actually govern most of our behavior.

[…]

When does the elephant listen to reason? The main way that we change our minds on moral issues is by interacting with other people. We are terrible at seeking evidence that challenges our own beliefs, but other people do us this favor, just as we are quite good at finding errors in other people’s beliefs. When discussions are hostile, the odds of change are slight. The elephant leans away from the opponent, and the rider works frantically to rebut the opponent’s charges.

But if there is affection, admiration, or a desire to please the other person, then the elephant leans toward that person and the rider tries to find the truth in the other person’s arguments. The elephant may not often change its direction in response to objections from its own rider, but it is easily steered by the mere presence of friendly elephants (that’s the social persuasion link in the social intuitionist model) or by good arguments given to it by the riders of those friendly elephants (that’s the reasoned persuasion link).

There is a critically important truth here, which is that people’s ideological positions aren’t always determined by specific facts and beliefs about a topic, so much as by their broader overall attitudes toward that topic. Just laying out all the facts may not be enough to change a person’s mind, because if they hold a fundamentally negative underlying attitude toward your side, they’ll want to resist coming over to it even when the facts say they should. If, for instance, someone just has a deep-seated gut intuition that conservatism is callous and closed-minded, or that liberalism is self-indulgent and irresponsible, you can show them all the facts and figures you want demonstrating the benefits of your conservative tax plan or your liberal social policy or whatever – and superficially, they may freely accept the validity of these points – but if their fundamental attitude toward the concept of “liberalism” or “conservatism” as a whole is just too loaded with negative connotations for them to want to get on board with it, then their overall worldview won’t shift much. This is also why certain concepts like “government spending” can be such automatic conversation-stoppers for conservatives, or why liberals instantly bristle at anything to do with big corporations, or why religious believers won’t even touch the word “atheism” with a twenty-foot pole. When you try to debate them about these particular subjects, you’re no longer dealing with neutral facts and beliefs, you’re dealing with attitudes – and changing someone’s attitude toward a topic is a much longer and more involved process than getting them to accept one specific fact or idea. (In fact, people’s specific beliefs can actually be surprisingly fluid and malleable (recall the Cohen study mentioned earlier, where people switched their stances at the drop of a hat just because they thought their “side” held a different stance than it did); it’s just their allegiance to their chosen side that’s so rigid and hard to budge.)

For this reason, if you’re trying to produce some kind of large-scale social change, it’s a good strategic idea to start with certain very specific points that people can easily get on board with, and then take baby steps from there – approaching it as a gradual long-term process – rather than trying to instantly shift their entire attitude in one fell swoop. Converting someone from a full-fledged Marxist into a full-fledged capitalist overnight, or from a religious fundamentalist into a secularist over the course of a single conversation, is just a bridge too far in most cases – those beliefs have been accumulated over an entire lifetime, and you can’t presume to be able to just reverse them instantaneously, no matter how good your arguments are. It’d be like trying to get someone to walk the length of an entire football field in a single step. But if you’re only leading them along one step at a time, then that’s a different story; it’s a lot easier to get someone to accept a single point for your side than to accept the whole package. The key is not to ask too much of them all at once – start with the parts of your ideology that will be easiest for them to get on board with, and promote those first. Then, once they’ve gotten used to the feeling of agreeing with you on something, and have had a while to integrate that new position into their own worldview, you can try to persuade them of another point, and then another, and so on – until ultimately they’re a lot closer to being on your side than when they started.

Frum explains how this difference in strategy can mean the difference between success and failure for mass socio-political movements:

The classic military formula for success: concentrate superior force at a single point. The Occupy Wall Street movement fizzled out in large part because of its ridiculously fissiparous list of demands and its failure to generate a leadership that could cull that list into anything actionable. Successful movements are built upon concrete single demands that can readily be translated into practical action: “Votes for women.” “End the draft.” “Overturn Roe v. Wade.” “Tougher punishments for drunk driving.”

People can say “yes” to such specific demands for many different reasons. Supporters are not called upon to agree on everything, but just one thing. “End the draft” can appeal both to outright pacifists and to military professionals who regard an army of volunteers as more disciplined and lethal than an army of conscripts. Critics of Roe run the gamut from those who wish a total ban on all abortions to legal theorists who believe the Supreme Court overstepped itself back in 1973.

[…]

These are limited asks with broad appeal.

On the other hand, if you build a movement that lists those specific and limited goals along a vast and endlessly unfolding roster of others from “preserve Dodd Frank” to “save the oceans” – if you indulge the puckish anti-politics of “not usually a sign guy, but geez” – you will collapse into factionalism and futility.

[…]

If you are building a movement […] you should remember that the goal is to gain allies among people who would not normally agree with you.

Commenter snyderjw puts it this way:

A wise old man once told me “my advice to revolutionaries is this, you have to run your revolution to win the love of an honest square… we finally defeated segregation and the Vietnam war when the men in suits started joining the protests on their lunch break.”

Granted, if you’re passionate about your cause and you think the ideas you’re promoting are obvious and irrefutable, it can be frustrating to feel like you have to dumb things down and put training wheels on your ideas just to convince the less-informed hoi polloi to get with the program already. But like it or not, it’s a necessary process, because not everyone has experienced the same things you’ve experienced over the course of their lives, and not everyone has been exposed to the same sources of information you have. Just as you can’t blame somebody for not following your religion if they grew up in a part of the world that has never been exposed to your religion before, you can’t automatically assume that someone who has spent their whole life immersed in one ideological narrative will be able to get on the same level as someone who’s spent their entire life immersed in the opposite narrative at the drop of a hat. T1J shares his thoughts on the subject:

Sometimes it’s hard to comprehend other people’s ideas. We just can’t imagine how and why some people believe the things they do. The correct view is so obvious to us, and we either assume that people are just lost and will never change their mind, or that we can somehow convince them to do a complete mental 180. Both of these are possible. But neither really reflect how most people actually are. People are usually hesitant to flat-out admit that they were wrong. But we can add nuance to someone’s view by offering a different perspective. The problem is that we sometimes think people should naturally understand things in the way that we do, so we’re confused when they aren’t very receptive to our ideas. If you’re trying to teach your old racist grandma who grew up in Jim Crow about racial microaggressions, it’s likely that she’s not going to be eye-to-eye with you. (Get it together, Grandma!) It’s possible that it’s a lost cause. But maybe you can find an alternate route towards getting her to understand. The fact of the matter is, sometimes you have to meet people where they are, rather than demanding that they catch up to you.

[…]

Speaking very generally, there are at least two types of social justice advocates on the internet. There are people who work with others to discuss effective solutions to the problems that society faces, and on the other hand there are people who don’t seem to be really interested in actually solving problems, and just kind of want to express their frustration and call people out. Now in many ways, that frustration is valid and justifiable. But in my opinion, you shouldn’t expect angry confrontation to lead to very much actual progress. But if you’re just here to just sort of yell at people, then… carry on I guess? But this [advice] isn’t really for you. This [advice] is for that first group – people that actually want to find solutions to both societal and personal conflicts.

I think a lot of us have this delusion that we’re going to convince other people to just suddenly wake up, like they’re going to have a light switch flipped in their brain overnight and come to realize that we were right all along, and then they’ll join us on the front line marching for freedom. And then we have this principled stubbornness, where it’s like, “Well if they can’t understand that they’re wrong, then fuck ‘em. The people who are right will win in the end anyway. They’ll just have to be on the wrong side of history.” And it’s true that some people have no interest in being informed or expanding their perspective. But it’s also true that some people just haven’t been engaged properly. Now, I believe that the world seems to slowly get more progressive over time. But I feel like proper advocacy involves doing our best to make our world a little bit better for this generation, not just future ones. And that’s got to involve getting out there and touching people’s hearts and minds. But everybody is on a different step in their journey towards enlightenment. Some people need just a little nudge in the right direction, while others probably need to be tossed a larger bone.

[…]

So for example, a thing that you often hear is, “You should respect women, because that woman is someone’s mother, or daughter, or wife, etc.” And this is kind of obnoxious, because it’s like, you should respect women because in addition to being wives and daughters and mothers, they’re also people and they don’t deserve to be mistreated. Like, you shouldn’t have to invoke familial relationship to a woman in order to understand why you shouldn’t be shitty to them. And that is 100% true. But if the goal is getting people to appreciate and respect women, and an effective context in which we can convince people to do that is reminding them of their relationships with the women in their own family, I feel like you should take the small victory. Not everyone is going to gain a sophisticated insight overnight. Sometimes we have to let people use training wheels until they catch up. And if we create this culture where anything less than perfection causes you to be dismissed and dragged regardless of your intentions, that just seems to be a very good way to alienate potential allies – which, if your goal is progress, is not what you want to be doing.

A couple months ago, there was a viral video on Twitter of this guy who was protesting outside of a Roy Moore rally. […] This guy was protesting Roy Moore’s homophobic remarks in honor of his gay daughter who had committed suicide. In the video he implies that at one point he didn’t accept his daughter’s homosexuality: “I was anti-gay myself; I said bad things to my daughter myself, which I regret.” The video is very moving in my opinion, and I’m kind of even getting emotional thinking about it, and it got a very positive response. But I did see a bunch of comments talking about how shitty it is that a gay person had to die before they were recognized as legitimate. And I mean, that’s a fair point. But first of all, this is a grieving father – like, back up for a minute. Secondly, this guy has probably lived his whole life in a homophobic environment, and it took something tragic to get him to reconsider his views. It’s terrible that he had to go through that, but he’s on the verge of a breakthrough – this is not the time to antagonize him. He’s probably not going to be marching with rainbow flags anytime soon, but he can share his story with his community and help bridge the gap. He could tell his friends to chill out when they’re using homophobic slurs or making shitty jokes. He could be a friend to closeted people down at the farm in Wicksburg, Alabama. I don’t know if he’s going to do any of these things. I’m just saying he’s less likely to if we immediately dogpile him for not being woke enough.

So here’s my thing: I understand that a lot of this is just wacky people on social media being mean just for the sake of doing it. One of the biggest lessons that I’ve learned is that Twitter doesn’t necessarily reflect the actual state of our society and our movement in reality. But I do think that there’s a notable segment of activists, both on and offline, who claim to want progress and change, but seem to be more concerned with dismissing people they deem to be “not on their level” than they are with actually trying to help people get there. And like I said, if that’s what you want to do, I think that’s unfortunate, but it’s not really my place to tell you not to. I just don’t think it does anything. In fact, it’s probably actively harmful to the movement. And again, some people clearly have no intention of engaging ideas in good faith or considering the possibility that they might be wrong about something. And it’s actually important for us to develop the ability to identify when that’s happening so we don’t waste time arguing with brick walls. The willingness to open ourselves to new ideas is a step that we all have to take on our own. No one can force us to do that. But at the same time, we can help people find their way to that door if we’re a little more patient, and take the time to meet them where they are in their path towards understanding.

In short, you can’t necessarily grade everyone on the same curve, so to speak. You have to be able to recognize areas where their understanding might not be in the same ballpark as yours, and make mental allowances for those differences. This can be hard, no doubt. If the person you’re debating just keeps spouting off nonsense with all the confidence in the world and doesn’t even seem aware that other opinions exist, it can be downright maddening. But if they already shared your beliefs, there wouldn’t be any need to have the conversation in the first place. By definition, the two of you will start off on different pages – and it’s only through patient dialogue that you’ll be able to close the gap. You won’t win anyone to your side, if you’re a Christian, by preaching to the choir; and you won’t win anyone to your side, if you’re a social justice liberal, by only interacting positively with other social justice liberals. You only make progress for your cause by engaging with people whose opinions seem outrageous or absurd to you, and maintaining your composure for long enough to persuade them otherwise. Alexander provides a parable:

The Emperor summons before him Bodhidharma and asks: “Master, I have been tolerant of innumerable gays, lesbians, bisexuals, asexuals, blacks, Hispanics, Asians, transgender people, and Jews. How many Virtue Points have I earned for my meritorious deeds?”

Bodhidharma answers: “None at all”.

The Emperor, somewhat put out, demands to know why.

Bodhidharma asks: “Well, what do you think of gay people?”

The Emperor answers: “What do you think I am, some kind of homophobic bigot? Of course I have nothing against gay people!”

And Bodhidharma answers: “Thus do you gain no merit by tolerating them!”

[…]

The best thing that could happen to this post is that it makes a lot of people, especially myself, figure out how to be more tolerant [of their ideological out-groups]. Not in the “of course I’m tolerant, why shouldn’t I be?” sense of the Emperor. […] But in the sense of “being tolerant makes me see red, makes me sweat blood, but darn it I am going to be tolerant anyway.”

Again, this doesn’t mean you have to pretend that your opponents’ views are just as correct and valid as yours. If you thought that, then you wouldn’t have any reason to prefer your own beliefs over theirs. All it means is that you should recognize the sincerity of their beliefs – and understand that if they’re wrong, you can and should try to persuade them as you would a friend, rather than blowing them off as brainless idiots or trying to bully them into submission. The most impressive people (for my money, at least) are those who try to be kind and understanding toward everyone, not just toward the people who are easy to be kind and understanding toward.

Here’s T1J again:

“So I have to, like, walk on eggshells and tiptoe around everything I say and everything that I do?”

Yes. It’s called being a thoughtful, considerate person. You should go out of your way to avoid fucking up other people’s lives. You should want to do that. And I promise you, it’s not that hard.

At the very minimum, try this: Whenever you discuss your views – whether it’s with someone who agrees with you or someone who disagrees with you – see if you can do so without at any point expressing disdain for the other side (especially if you find their opinions particularly contemptible). If you don’t entirely understand the justifications behind your opponents’ views, it’s perfectly fine to be open about that; if you think there might be a certain flaw in their reasoning, it’s OK to say so and carefully explain why. But see if you can actually explain yourself without expressing anything that might even be perceived by the other side as dismissive eye-rolling or impatient tetchiness. This approach will, at the very least, ensure that the discussion won’t devolve into a total train wreck that does more harm than good and causes both sides to dig in their heels even deeper.

Of course, there’s no good reason just to settle for this bare minimum alone. If you really care about the topics being debated, you should want to do better than “not an overall negative” – you should want your conversations to be an overall positive. “Not counterproductive” is great; but “productive” is even better. The way to achieve legitimately productive discourse, then, is to demonstrate not only that you’re willing to patiently listen to your opponents’ arguments, but that you actually understand their reasoning perfectly and can recognize exactly where their ideas are most compelling, before you even begin to think of refuting them. This concept is known in philosophy as the Principle of Charity, defined as “interpreting a speaker’s statements in the most rational way possible and, in the case of any argument, considering its best, strongest possible interpretation.” Alexander puts it this way:

[The] Principle of Charity […] says you should always assume your ideological opponents’ beliefs must make sense from their perspective. If you can’t even conceive of a position you oppose being tempting to someone, you don’t understand it and are probably missing something. You might be missing a strong argument that the position is correct. Or you might just be missing something totally out of left field.

It’s not always easy to imagine someone else’s mindset as accurately as possible – the natural urge to construe their ideas as weak and faulty (and therefore easy to beat) can sometimes be almost impossible to override without conscious effort. But if you’re able to consistently adhere to the Principle of Charity, it’s one of the best ways to ensure that your ideological debates will actually get somewhere productive, because it gives you a more accurate understanding of what foundations your opponents’ arguments are actually resting on, and which points therefore need to be addressed in order to influence their views. If you’re just trying to argue against some misconstrued ideas that your opponents don’t actually hold (i.e. committing a straw man fallacy), then you’re trying to resolve a problem that doesn’t actually exist. You’re like the proverbial drunk who loses his keys in the dark bushes but looks for them under the streetlight instead because “the light is better there.” Sure, it would be easier to find the keys if they were there – just like it would be easier to refute your opponents’ ideas if they actually were the egregious caricatures you’d like to imagine them to be – but the fact that they aren’t means that any effort toward that end is a misdirected waste of time. On the other hand, if you’re able to accurately articulate the reasoning behind your opponents’ arguments, then it gives you legitimate grounds to be able to dissect and refute them. It’s a lot more convincing for your opponents to hear their arguments refuted by someone who understands the arguments perfectly – maybe even better than they do – and still doesn’t think they’re strong enough to hold water, than to hear those same arguments refuted by someone who doesn’t seem to “get it” at all and is just reciting their own side’s talking points.

So just to take one common example, if you’re an activist on either side of the abortion debate who stubbornly maintains that the only reason anyone could possibly be opposed to abortion is that they want to control women – or that the only reason anyone could possibly be in favor of abortion is that they want to kill babies – then you won’t convert many people to your cause if the desire to control women or kill babies is not, in fact, the thing motivating their views. On the other hand, if you recognize that your opponents’ positions actually rely fundamentally on whether they consider fetuses to be the moral equivalent of full-grown people, and then you address those assumptions in a proficient way, it’s much more likely that your arguments will have a real effect. By addressing the more charitable interpretation of your opponents’ views, rather than some demonized version that they don’t actually subscribe to, you’re effectively striking at the roots of their beliefs, and not just hacking away at the shadows of the branches.

Fisher and Ury write in their famous guide to negotiation:

Many consider it a good tactic not to give the other side’s case too much attention, and not to admit any legitimacy in their point of view. A good negotiator does just the reverse. Unless you acknowledge what they are saying and demonstrate that you understand them, they may believe you have not heard them. When you then try to explain a different point of view, they will suppose that you still have not grasped what they mean. They will say to themselves, “I told him my view, but now he’s saying something different, so he must not have understood it.” Then instead of listening to your point, they will be considering how to make their argument in a new way so that this time maybe you will fathom it. So show that you understand them. “Let me see whether I follow what you are telling me. From your point of view, the situation looks like this. . . .”

As you repeat what you understood them to have said, phrase it positively from their point of view, making the strength of their case clear. You might say, “You have a strong case. Let me see if I can explain it. Here’s the way it strikes me. . . . ” Understanding is not agreeing. One can at the same time understand perfectly and disagree completely with what the other side is saying. But unless you can convince them that you do grasp how they see it, you may be unable to explain your viewpoint to them. Once you have made their case for them, then come back with the problems you find in their proposal. If you can put their case better than they can, and then refute it, you maximize the chance of initiating a constructive dialogue on the merits and minimize the chance of their believing you have misunderstood them.

Daniel Dennett reiterates this point:

Just how charitable are you supposed to be when criticizing the views of an opponent? If there are obvious contradictions in the opponent’s case, then you should point them out, forcefully. If there are somewhat hidden contradictions, you should carefully expose them to view – and then dump on them. But the search for hidden contradictions often crosses the line into nitpicking, sea-lawyering and outright parody. The thrill of the chase and the conviction that your opponent has to be harboring a confusion somewhere encourages uncharitable interpretation, which gives you an easy target to attack. But such easy targets are typically irrelevant to the real issues at stake and simply waste everybody’s time and patience, even if they give amusement to your supporters. The best antidote I know for this tendency to caricature one’s opponent is a list of rules promulgated many years ago by social psychologist and game theorist Anatol Rapoport (creator of the winning Tit-for-Tat strategy in Robert Axelrod’s legendary prisoner’s dilemma tournament).

How to compose a successful critical commentary:

  1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”
  2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).
  3. You should mention anything you have learned from your target.
  4. Only then are you permitted to say so much as a word of rebuttal or criticism.

One immediate effect of following these rules is that your targets will be a receptive audience for your criticism: you have already shown that you understand their positions as well as they do, and have demonstrated good judgment (you agree with them on some important matters and have even been persuaded by something they said).

This technique, of laying out your opponents’ arguments not only in a way they’d agree is accurate, but in a way that is actually more compelling than how they themselves are making their case, is often referred to as creating a steel man (i.e. the opposite of creating a straw man). Chana Messinger explains the logic of this approach:

“The beginning of thought is in disagreement – not only with others but also with ourselves.” – Eric Hoffer

You know when someone makes an argument, and you know you can get away with making it seem like they made a much worse one, so you attack that argument for points? That’s strawmanning. Lots of us have done it, even though we shouldn’t. But what if we went one step beyond just not doing that? What if we went one better? Then we would be steelmanning, the art of addressing the best form of the other person’s argument, even if it’s not the one they presented. Mackenzie McHale, from the Newsroom, puts it on her list of Very Important Things for journalists (#2), and it would serve us well, too.

[Image – Newsnight 2.0 Rules: 1. Is this information we need in the voting booth? 2. Is this the best possible form of the argument? 3. Is the story in historical context?]

Why should we do this? Three reasons: It makes us better rationalists, better arguers, and better people.

1. Better rationalists: I, and all of you, I think, care a great deal about what is true. One of the ways we find out what is true is to smash our arguments against each other and see what comes out, abandoning the invalid arguments and unsound conclusions for better and brighter ideas as we march towards Truth. Perhaps the greatest limitation on this method is the finitude of the arguments we can possibly encounter. By chance, we may never be exposed to good arguments for other positions or against our own, in which case we may wrongly but reasonably discount other positions as unsupported and incorrect, and we would never know.

So we need to find better arguments. Where? Well, aside from sitting in rooms alone arguing with ourselves (guilty), we have the opportunity to construct these better arguments every time we are arguing with someone. We probably know best which arguments are most difficult for our position, because we know our belief’s real weak points and what kind of evidence we tend to find compelling. So I challenge you, when arguing with someone, to use that information to look for ways to make their arguments better, more difficult for you to counter. This is the highest form of disagreement.

If you know of a better counter to your own argument than the one they’re giving, say so. If you know of evidence that supports their side, bring it up. If their argument rests on an untrue piece of evidence, talk about the hypothetical case in which they were right. Take their arguments seriously, and make them as good as possible. Because if you can’t respond to that better version, you’ve got some thinking to do, even if you are more right than the person you’re arguing with. Think more deeply than you’re being asked to.

Do what fictional Justice Mulready does here: [embedded video clip]

In this way, you both learn, and you’re having discussions of the highest level you’re capable of, really grappling with the ideas instead of bringing up rehearsed points and counterpoints. It is a difficult task, but it forces us to face those arguments that might actually pose problems for us, instead of just what we happen to see around us. This ensures that we have the right answer, not just a successful answer.

2. Better arguers: But Chana, you might say, I’m actually trying to get something done around here, not just cultivate my rationalist virtue or whatever nonsense you’re peddling. I want to convince people they’re wrong and get them to change their minds.

Well, you, too, have something to gain from steelmanning.

First, people like having their arguments approached with care and serious consideration. Steelmanning requires that we think deeply about what’s being presented to us and find ways to improve it. By addressing the improved version, we show respect and honest engagement to our interlocutor. People who like the way you approach their arguments are much more likely to care about what you have to say about those arguments. This, by the way, also makes arguments way more productive, since no one’s looking for easy rebuttals or cheap outs.

Second, people are more convinced by arguments which address the real reason they reject your ideas rather than those which address those aspects less important to their beliefs. If nothing else, steelmanning is a fence around accidental strawmanning, which may happen when you misunderstand their argument, or they don’t express it as well as they could have. Remember that you are arguing against someone’s ideas and beliefs, and the arguments they present are merely imperfect expressions of those ideas and beliefs and why they hold them. To attack the inner workings rather than only the outward manifestation, you must understand them, and address them properly.

3. Better people: I’m serious. I think steelmanning makes you a better person. It makes you more charitable, forcing you to assume, at least for a moment, that the people you’re arguing with, much as you ferociously disagree with them or even actively dislike them, are people who might have something to teach you. It makes you more compassionate, learning to treat those you argue with as true opponents, not merely obstacles. It broadens your mind, preventing us from making easy dismissals or declaring preemptive victory, pushing us to imagine all the things that could and might be true in this beautiful, strange world of ours. And it keeps us rational, reminding us that we’re arguing against ideas, not people, and that our goal is to take down these bad ideas, not to revel in the defeat of incorrect people.

Try it. It might just be more challenging, rewarding and mind-expanding than you expect.

It’s easy to want to focus on the low-hanging fruit and only contend with the most ridiculous and wrongheaded arguments against your worldview. But limiting yourself to the “lowest common denominator” level of the discourse only means that you’re preventing yourself from being able to participate in the higher-level exchanges of ideas where the real intellectual progress is made. As Tyler Cowen puts it:

Every movement…has a smart version and a stupid version, I try to (almost) always consider the smart version. The stupid version is always wrong for just about anything.

If you focus on the stupid version, you too will end up as the stupid version of your own movement.

Alexander elaborates, pointing out that if you only focus on your opponents’ worst arguments, it can desensitize you to their legitimate points which might actually be worth engaging:

Inoculation is when you use a weak pathogen like cowpox to build immunity against a stronger pathogen like smallpox. The inoculation effect in psychology is when a person, upon being presented with several weak arguments against a proposition, becomes immune to stronger arguments against the same position.

Tell a religious person that Christianity is false because Jesus is just a blatant ripoff of the warrior-god Mithras and they’ll open up a Near Eastern history book, notice that’s not true at all, and then be that much more skeptical of the next argument against their faith. “Oh, atheists. Those are those people who think stupid things like Jesus = Mithras. I already figured out they’re not worth taking seriously.” Except on a deeper level that precedes and is immune to conscious thought.

If you’re really serious about your ideas, then, you should scrupulously try to avoid such mental shortcuts; you should instead seek to grapple with the strongest counterarguments you can find. You shouldn’t convince yourself that you understand what the other side believes simply because of what you’ve heard from your own side about what the other side believes (typically easily-refuted caricatures). And you shouldn’t just seek out a handful of the least competently-made arguments from the other side and then declare victory once you’ve refuted them. You should actually make an effort to listen to the smartest people on the other side directly, until you get to the point where you’re confident that you genuinely do understand exactly where they’re coming from. As Ben Casnocha writes:

I have yet to find a more efficient and reliable way to probe the depths of a person’s knowledge and seriousness about an issue than asking them to explain the other side’s perspective.

And if you’re truly serious about your ideas, you can go even further than that. As Yudkowsky writes:

The economist Bryan Caplan invented an improved version of steelmanning called the Ideological Turing Test. In the Ideological Turing Test, you must write an argument for an opposing position which is realistic enough that an adherent of the position cannot tell the difference between what you have written, and something that was written by an actual advocate. The Ideological Turing Test is stricter than ‘steelmanning’, since it is far too easy to persuade yourself that you have generated the ‘strongest argument’, and much less easy to fool someone who actually believes the opposing position into thinking that you were sincerely doing your best to advocate it. It is a test of understanding; a trial to make sure you really understand the arguments you say you don’t believe.

People fail the Ideological Turing Test […] because they’re attached to their own mental moorings, because they fear the violation of letting themselves see the universe from another viewpoint, because they plain lack practice at imagining that another viewpoint might also think itself justified.

But if you can overcome these difficulties, this technique is the strongest way of using the Principle of Charity to its fullest potential. True, it may still be possible in some cases to get away with just cynically pretending to understand your opponents’ arguments in order to manipulate them into agreeing with you. But more often, taking that approach will just cause your opponents to pick up on your insincerity and resent you for it. If you really want to have a productive conversation, you have to honestly, genuinely understand the other side. And if you can do that, then although you may still disagree with your opponents in the end, you will at least be able to identify exactly which of their perceptions and intuitions differ from your own and how those differences lead to your contrasting worldviews. As Fisher and Ury put it:

At the very least, if you and the other side cannot reach first-order agreement, you can usually reach second-order agreement – that is, agree on where you disagree, so that you both know the issues in dispute, which are not always obvious.

T1J explains this process in more detail:

There’s a common misconception that the point of a debate is to change someone’s view on a topic. But seeing as how debates often, if not usually, cover subjective topics about which people have strong convictions, changing someone’s view on a topic is something that very rarely happens. Personally, I think a better goal than changing someone’s view is to find what I like to call the Fundamental Disagreement.

Okay, now first of all, any good argument is based on facts. If you have an argument but it’s not based on facts, then you already lose; you’re already doing it wrong. But the amount of facts that we know is finite. So I would argue that every position and every opinion can ultimately be traced back to one premise (or premises) that is accepted to be true without necessarily being proven as such. So the goal of the debate should be to find that fundamental premise that you and your opponent disagree on. And since that premise is often something that can’t conclusively be supported or refuted, usually there’s nothing more to argue after that.

For example, I often find myself in religious debates, as you might imagine. And much of what I object [to] about religion is based on science. So most of the arguments I make are hinged upon the idea that science is valid and reliable. Now, I think science has a good track record, but I can’t necessarily prove that science will always be reliable, and someone else can’t prove that it will not be. So if you disagree that science is reliable, then that’s the Fundamental Disagreement. We can’t argue about that because we can’t prove it either way. Now, we can have a chat about the fundamental nature of existence, and what’s reliable and what’s not, but that’s all very metaphysical; and metaphysical discussions, while interesting, don’t really get us anywhere. But a good thing about using this method of debate is that sometimes before you even get to that Fundamental Disagreement, you might find that one of your or your opponent’s premises is in fact incorrect – and then maybe someone will win the argument. (But like I said, I think that rarely happens.)

A lot of times, when you discover the fundamental source of disagreement between you and your opponent, it turns out that it’s not even necessarily a matter of one of you being completely right and the other being completely wrong; it’s more a matter of the two of you having been exposed to completely different sets of facts and influences, and therefore formulating worldviews which, despite making logical sense within their own isolated context, don’t fit the context of the other’s perspective. Alexander explains:

I read Atlas Shrugged probably about a decade ago, and felt turned off by its promotion of selfishness as a moral ideal. I thought that was basically just being a jerk. After all, if there’s one thing the world doesn’t need (I thought) it’s more selfishness.

Then I talked to a friend who told me Atlas Shrugged had changed his life. That he’d been raised in a really strict family that had told him that ever enjoying himself was selfish and made him a bad person, that he had to be working at every moment to make his family and other people happy or else let them shame him to pieces. And the revelation that it was sometimes okay to consider your own happiness gave him the strength to stand up to them and turn his life around, while still keeping the basic human instinct of helping others when he wanted to and he felt they deserved it (as, indeed, do Rand characters).

[…]

In a recent essay I complained about bravery debates, arguments where people boast about how brave they are to take an unorthodox and persecuted position, and their opponents counter that they’re not persecuted heretics, they’re a vast leviathan persecuting everyone else. But I think I underestimated an important reason why some debates have to be bravery debates.

Suppose there are two sides to an issue. Be more or less selfish. Post more or less offensive atheist memes. Be more or less willing to blame and criticize yourself.

There are some people who need to hear each side of the issue. Some people really need to hear the advice “It’s okay to be selfish sometimes!” Other people really need to hear the advice “You are being way too selfish and it’s not okay.”

It’s really hard to target advice at exactly the people who need it. You can’t go around giving everyone surveys to see how selfish they are, and give half of them Atlas Shrugged and half of them the collected works of Peter Singer. You can’t even write really complicated books on how to tell whether you need more or less selfishness in your life – they’re not going to be as buyable, as readable, or as memorable as Atlas Shrugged. To a first approximation, all you can do is saturate society with pro-selfishness or anti-selfishness messages, and realize you’ll be hurting a select few people while helping the majority.

But in this case, it makes a really big difference what the majority actually is.

Suppose an Objectivist argues “Our culture has become too self-sacrificing! Everyone is told their entire life that the only purpose of living is to work for other people. As a result, people are miserable and no one is allowed to enjoy themselves at all.” If they’re right, then helping spread Objectivism is probably a good idea – it will help these legions of poor insufficiently-selfish people, but there will be very few too-selfish-already people who will be screwed up by the advice.

But suppose Peter Singer argues “We live in a culture of selfishness! Everyone is always told to look out for number one, and the poor are completely neglected!” Well, then we want to give everyone the collected works of Peter Singer so we can solve this problem, and we don’t have to worry about accidentally traumatizing the poor self-sacrificing people more, because we’ve already agreed there aren’t very many of these at all.

It’s much easier to be charitable in political debates when you view the two participants as coming from two different cultures that err on opposite sides, each trying to propose advice that would help their own culture, each being tragically unaware that the other culture exists.

A lot of the time this happens when one person is from a dysfunctional community and suggesting very strong measures against some problem the community faces, and the other person is from a functional community and thinks the first person is being extreme, fanatical or persecutory.

This happens a lot among […] atheists. One guy is like “WE NEED TO DESTROY RELIGION IT CORRUPTS EVERYTHING IT TOUCHES ANYONE WHO MAKES ANY COMPROMISES WITH IT IS A TRAITOR KILL KILL KILL.” And the other guy is like “Hello? Religion may not be literally true, but it usually just makes people feel more comfortable and inspires them to do nice things and we don’t want to look like huge jerks here.” Usually the first guy was raised Jehovah’s Witness and the second guy was raised Moralistic Therapeutic Deist.

The point here is not that both sides are always equally correct, of course, or that it’s impossible to ever distinguish between competing truth claims. The point is just to illustrate why the Principle of Charity is so important when trying to determine where the truth really lies, because it’s so common for one or both sides of a debate to only have a partial view of the entire picture. The opposing sides may be operating from completely different baseline sets of facts and assumptions, and it’s only by making an earnest effort to get to the root of those contrasting worldviews that it’s possible to progress toward truth and common understanding. Here’s Alexander once again:

This […] ethos might be summed up as: charity over absurdity.

Absurdity is the natural human tendency to dismiss anything you disagree with as so stupid it doesn’t even deserve consideration. In fact, you are virtuous for not considering it, maybe even heroic! You’re refusing to dignify the evil peddlers of bunkum by acknowledging them as legitimate debate partners.

Charity is the ability to override that response. To assume that if you don’t understand how someone could possibly believe something as stupid as they do, that this is more likely a failure of understanding on your part than a failure of reason on theirs.

There are many things charity is not. Charity is not a fuzzy-headed caricature-pomo attempt to say no one can ever be sure they’re right or wrong about anything. Once you understand the reasons a belief is attractive to someone, you can go ahead and reject it as soundly as you want. Nor is it an obligation to spend time researching every crazy belief that might come your way. Time is valuable, and the less of it you waste on intellectual wild goose chases, the better.

It’s more like Chesterton’s Fence. G.K. Chesterton gave the example of a fence in the middle of nowhere. A traveller comes across it, thinks “I can’t think of any reason to have a fence out here, it sure was dumb to build one” and so takes it down. She is then gored by an angry bull who was being kept on the other side of the fence.

Chesterton’s point is that “I can’t think of any reason to have a fence out here” is the worst reason to remove a fence. Someone had a reason to put a fence up here, and if you can’t even imagine what it was, it probably means there’s something you’re missing about the situation and that you’re meddling in things you don’t understand. None of this precludes the traveller who knows that this was historically a cattle farming area but is now abandoned – ie the traveller who understands what’s going on – from taking down the fence.

As with fences, so with arguments. If you have no clue how someone could believe something, and so you decide it’s stupid, you are much like Chesterton’s traveler dismissing the fence (and philosophers, like travelers, are at high risk of stumbling across bull.)

I would go further and say that even when charity is uncalled-for, it is advantageous. The most effective way to learn any subject is to try to figure out exactly why a wrong position is wrong. And sometimes even a complete disaster of a theory will have a few salvageable pearls of wisdom that can’t be found anywhere else. [It’s often enlightening to rebuild] a stupid position into the nearest intelligent position and then [see] what you can learn from it.

Indeed, even in the cases where your opponent’s worldview might seem utterly mystifying, there is often at least one useful thing that you can take from it – one tiny kernel of underlying truth that motivated them to form their ideology in the first place. If you can maintain your patience and curiosity for long enough to find it, then even the most far-out debate can be a worthwhile experience. You might even discover weaknesses in your own arguments that you can then correct or improve upon.

XIV.

Within the kind of absolutist mindset that dominates today’s discourse, people often become so obsessed with trying to root out every minuscule offense against their side that they reflexively make snap judgments against anyone and anything that shows even a whiff of impurity. But as Storr points out:

[There’s a] kind of binary, dismissive thinking that I worry is evident among some Skeptics. In their haste to dismiss the ranting [zealots and quacks], subtler truths are being missed. […] Just because [one’s opponents] are wrong about one thing, it hasn’t necessarily followed that they are wrong about it all. And yet they are crucified for making one mistake.

T1J gives one example:

[It’s a problem] when you use the supposed failure of a movement as evidence that their entire cause should be discredited. During the African-American civil rights movement, the Nation of Islam was a corrupt black supremacist movement. But just because that group sucked didn’t change the fact that the problem they were fighting against existed.

And Alexander provides another example:

This is the same pattern we see in Israel and Palestine. How many times have you seen a news story like this one: “Israeli speaker hounded off college campus by pro-Palestinian partisans throwing fruit. Look at the intellectual bankruptcy of the pro-Palestinian cause!” It’s clearly intended as an argument for something other than just not throwing fruit at people. The causation seems to go something like “These particular partisans are violating the usual norms of civil discussion, therefore they are bad, therefore something associated with Palestine is bad, therefore your General Factor of Pro-Israeliness should become more strongly positive, therefore it’s okay for Israel to bomb Gaza.” Not usually said in those exact words, but the thread can be traced.

This kind of thinking goes back to the whole semantic net thing discussed at the very beginning of this post; rather than considering an isolated idea on its own merits, the more natural impulse is to judge it based on the entire bundle of other things that are also associated with that particular idea. It’s a kind of guilt-by-association approach that allows you to reject opposing ideas not just on an individual one-by-one basis, but on a wholesale basis, all at once. If your opponent is wrong about enough key points (or wrong in their tactics), then that’s enough to lump them into the mental category of “someone who’s wrong about things in general,” and not have to bother with anything else they have to say. (This also relates back to the inoculation effect mentioned before.)

Still though, as the old saying goes, even a broken clock is right twice a day. Or as Robert Pirsig puts it: “The world’s greatest fool may say the Sun is shining, but that doesn’t make it dark out.”

If you think about it purely in statistical terms, how likely can it really be that your opponents are all 100% wrong about everything 100% of the time, while you’re 100% right about everything 100% of the time? If you’ve had even the slightest bit of experience dealing with ideas before, the answer should be self-evident – after all, you’ve been mistaken so many times that obviously you can’t be 100% infallible; your opponents must at least have some things that they’re right about. But this concept is easier to accept in theory than in practice. Here’s Storr again:

I consider – as everyone surely does – that my opinions are the correct ones. And yet, I have never met anyone whose every single thought I agreed with. When you take these two positions together, they become a way of saying, ‘Nobody is as right about as many things as me.’ And that cannot be true. Because to accept that would be to confer upon myself a Godlike status. It would mean that I possess a superpower: a clarity of thought that is unique among humans. Okay, fine. So I accept that I am wrong about things – I must be wrong about them. A lot of them. But when I look back over my shoulder and I double-check what I think about religion and politics and science and all the rest of it… well, I know that I am right about that… and that… and that and that and – it is usually at this point that I start to feel strange. I know that I am not right about everything and yet I am simultaneously convinced that I am. I believe these two things completely, and yet they are in catastrophic logical opposition of each other.

If you allow yourself to get too caught up in the mistaken impression that you must be right about everything, it can lead you into logical traps. This is basically where the phenomenon of closed-mindedness comes from, as Tavris and Aronson point out:

[This mentality] creates a logical labyrinth because it presupposes two things: One, people who are open-minded and fair ought to agree with a reasonable opinion. And two, any opinion I hold must be reasonable; if it weren’t, I wouldn’t hold it. Therefore, if I can just get my opponents to sit down here and listen to me, so I can tell them how things really are, they will agree with me. And if they don’t, it must be because they are biased.

To quote yet another old saying, though:

It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.

The truth is, most people overestimate how correct they are about most things. As Alexander writes:

Nearly everyone is very very very overconfident. We know this from experiments where people answer true/false trivia questions, then are asked to state how confident they are in their answer. If people’s confidence was well-calibrated, someone who said they were 99% confident (ie only 1% chance they’re wrong) would get the question wrong only 1% of the time. In fact, people who say they are 99% confident get the question wrong about 20% of the time.

It gets worse. People who say there’s only a 1 in 100,000 chance they’re wrong? Wrong 15% of the time. One in a million? Wrong 5% of the time. They’re not just overconfident, they are fifty thousand times as confident as they should be.

This is not just a methodological issue. Test confidence in some other clever way, and you get the same picture. For example, one experiment asked people how many numbers there were in the Boston phone book. They were instructed to set a range, such that the true number would be in their range 98% of the time (ie they would only be wrong 2% of the time). In fact, they were wrong 40% of the time. Twenty times too confident! What do you want to bet that if they’d been asked for a range so wide there was only a one in a million chance they’d be wrong, at least five percent of them would have bungled it?
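The notion of “calibration” that Alexander is invoking here is easy to make concrete: group a person’s answers by their stated confidence, then compare each group’s claimed accuracy against its actual accuracy. Here’s a minimal sketch in Python – the survey data is invented purely for illustration, not taken from the studies he cites:

    # Minimal calibration check: bucket answers by stated confidence,
    # then compare claimed accuracy to actual accuracy in each bucket.
    # The data below is invented purely for illustration.
    from collections import defaultdict

    # (stated confidence, whether the answer was actually correct)
    answers = [
        (0.99, True), (0.99, False), (0.99, True), (0.99, True), (0.99, False),
        (0.70, True), (0.70, True), (0.70, False), (0.70, True), (0.70, False),
    ]

    buckets = defaultdict(list)
    for confidence, correct in answers:
        buckets[confidence].append(correct)

    for confidence, results in sorted(buckets.items(), reverse=True):
        accuracy = sum(results) / len(results)
        print(f"claimed {confidence:.0%} right; actually {accuracy:.0%} right "
              f"({len(results)} answers)")

A well-calibrated person would show claimed and actual accuracy roughly matching in every bucket; the overconfidence findings above amount to saying that, at the high-confidence end, the actual number falls far short of the claimed one.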

The problem, of course, is that although it may be easy to admit in principle that you can’t be right about everything, there’s no way of knowing which beliefs are actually the mistaken ones. From the inside, it feels like they’re all perfectly correct. So what can be done about this? One approach recommended by Alexander is to stop putting so much confidence in your “view from the inside” in the first place, and instead take a more meta-oriented approach, in which you not only consider the ideas themselves, but also your own certainty levels, from an outside perspective:

The Inside View is when you weigh the evidence around something, and go with whatever side’s evidence seems most compelling. The Outside View is when you notice that you feel like you’re right, but most people in the same situation as you are wrong. So you reject your intuitive feelings of rightness and assume you are probably wrong too. [An example] to demonstrate:

[…]

I feel like I’m an above-average driver. But I know there are surveys saying everyone believes they’re above-average drivers. Since most people who believe they’re an above-average driver are wrong, I reject my intuitive feelings and assume I’m probably just an average driver.

Applying this principle to ideological matters, he continues:

Every so often, I talk to people about politics and the necessity to see things from both sides. I remind people that our understanding of the world is shaped by tribalism, the media is often biased, and most people have an incredibly skewed view of the world. They nod their heads and agree with all of this and say it’s a big problem. Then I get to the punch line – that means they should be less certain about their own politics, and try to read sources from the other side. They shake their head, and say “I know that’s true of most people, but I get my facts from Vox, which backs everything up with real statistics and studies.” Then I facepalm so hard I give myself a concussion. This is the same situation where a tiny dose of Meta-Outside-View could have saved them.

Finally, he adds:

I started off by [writing] about “the principle of charity”, but I had trouble defining it and in retrospect I’m not that good at it anyway. What can be salvaged from such a concept? I would say “behave the way you would if you were less than insanely overconfident about most of your beliefs.” This is the Way. The rest is just commentary.

Cowen also has an interesting way of thinking about this. He points out that even if you have the most well-founded combination of beliefs possible, you’re still more likely than not to be wrong on some points, just as a matter of statistics:

We should be skeptical of ideologues who claim to know all of the relevant paths to making ours a better world. How can we be sure that a favored ideology will in fact bring about good consequences? Given the radical uncertainty of the more distant future, we can’t know how to achieve preferred goals with any kind of certainty over longer time horizons. Our attachment to particular means should therefore be highly tentative, highly uncertain, and radically contingent.

Our specific policy views, though we may rationally believe them to be the best available, will stand only a slight chance of being correct. They ought to stand the highest chance of being correct of all available views, but this chance will not be very high in absolute terms. Compare the choice of one’s politics to betting on the team most favored to win the World Series at the beginning of the season. That team does indeed have the best chance of winning, but most of the time our sports predictions are wrong, even if we are good forecasters on average [since even if the most-favored team has (say) a 25% chance of winning, that still means that the other 29 teams’ combined chances, despite each being less than 25% individually, will add up to 75% in aggregate, meaning that the actual most likely outcome is that one of those 29 other teams will end up winning]. So it is with politics and policy.

Our attitudes toward others should therefore be accordingly tolerant. Imagine that your chance of being right [in terms of your entire worldview, with all its thousands of constituent beliefs] is [higher than anyone else’s]. Yet there are many […] opposing [worldviews], so even if yours is best, you’re probably still wrong. Now imagine that your wrongness will lead to a slower rate of economic growth, a poorer future, and perhaps even the premature end of civilization (not enough science to fend off that asteroid!). That means your political views, though they are the best ones out there, will have grave negative consequences with [high] probability. […] In this setting, how confident should you really be about the details of your political beliefs? How firm should your dogmatism be about means-ends relationships? Probably not very; better to adopt a tolerant demeanor and really mean it.

As a general rule, we should not pat ourselves on the back and feel that we are on the correct side of an issue. We should choose the course that is most likely to be correct, keeping in mind that at the end of the day we are still more likely to be wrong than right. Our particular views, in politics and elsewhere, should be no more certain than our assessments of which team will win the World Series. With this attitude political posturing loses much of its fun, and indeed it ought to be viewed as disreputable or perhaps even as a sign of our own overconfident and delusional nature.
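The World Series arithmetic in that bracketed aside is worth spelling out, since it’s the crux of the analogy. A quick sketch, using the hypothetical 25% figure from the quote:

    # Hypothetical win probabilities for 30 teams: one favorite at 25%,
    # with the remaining 75% spread across the other 29 teams.
    favorite = 0.25
    others = [(1 - favorite) / 29] * 29   # roughly 2.6% per team

    print(f"Favorite's chance of winning:      {favorite:.0%}")
    print(f"Chance that some other team wins:  {sum(others):.0%}")

Betting on the favorite is still the best available bet – no single rival team is more likely to win – yet that bet loses 75% of the time. Cowen’s point is that holding the best-supported worldview works the same way.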

People like to act like whatever political or religious idea they’re espousing is a complete no-brainer, and that you’d have to be willfully ignorant not to see the blindingly obvious truth. But as Alexander points out (citing Yudkowsky), the fact that an issue is so contentious and so heavily debated suggests that in most cases, it’s not as self-evident as its supporters might believe it to be:

In one of the classics of the Less Wrong Sequences, Eliezer argues that policy debates should not appear one-sided. College students are pre-selected for “if they were worse they couldn’t get in, if they were better they’d get in somewhere else.” Political debates are pre-selected for “if it were a stupider idea no one would support it, if it were a better idea everyone would unanimously agree to do it.” We never debate legalizing murder, and we never debate banning glasses. The things we debate are pre-selected to be in a certain range of policy quality.

(To give three examples: no one debates banning sunglasses, that is obviously stupid. No one debates banning murder, that is so obviously a good idea that it encounters no objections. People do debate raising the minimum wage, because it has some plausible advantages and some plausible disadvantages. We might be able to squeeze one or two extra utils out of getting the minimum-wage question exactly right, but it’s unlikely to matter terribly much.)

He also explains how this effect can apply not only to theoretical policy debates, but also to individual news events:

The more controversial something is, the more it gets talked about.

A rape that obviously happened? Shove it in people’s face and they’ll admit it’s an outrage, just as they’ll admit factory farming is an outrage. But they’re not going to talk about it much. There are a zillion outrages every day, you’re going to need something like that to draw people out of their shells.

On the other hand, the controversy over dubious rape allegations is exactly that – a controversy. People start screaming at each other about how they’re misogynist or misandrist or whatever, and Facebook feeds get filled up with hundreds of comments in all capital letters about how my ingroup is being persecuted by your ingroup. At each step, more and more people get triggered and upset. Some of those triggered people do emergency ego defense by reblogging articles about how the group that triggered them are terrible, triggering further people in a snowball effect that spreads the issue further with every iteration.

Of course, needless to say, that doesn’t mean that every potentially contentious issue is necessarily a nuanced one with strong points on both sides. It’s totally possible to have a situation where one side of a debate is clearly the correct one, and yet the debate still doesn’t get resolved for some other reason. Maybe a particular idea really is obvious, for instance, but it just hasn’t been implemented yet because nobody cares about it all that much and it isn’t on anyone’s radar (aside from maybe certain narrow groups that have a vested interest in obstructing it). Abolishing the penny is one such niche idea that comes to mind. Or maybe there’s an idea that would be obvious to anyone with access to certain privileged information on the subject, but not everyone has access to such information. If you alone had been visited by aliens, for instance, the truth of their existence would be obvious to you but not to anyone else. Differences in religious belief and personal prejudice are other factors that can skew things for similar reasons.

Still, in most debates, where these special conditions don’t apply and the issues at hand are high-profile ones where everyone has access to roughly the same pool of information, it’s a good bet that you’re grossly oversimplifying the situation if you insist that the solutions are as obvious as prohibiting murder or not prohibiting sunglasses. They may seem obvious to you, but you can’t rightly call them obvious in a more general sense – because if they really were, then you wouldn’t be having to debate them in the first place.

Incidentally, this phenomenon also explains a lot about why the two main political parties in the US have the platforms that they do. Ever consider how strange it is that the country just happens to be almost exactly 50% liberal and 50% conservative on seemingly every major issue? That’s not because half the country has different beliefs from the other half and, by sheer coincidence, the distributions of those beliefs all happen to break 50-50. It’s because the terms “liberal” and “conservative” are defined by the few points where the populace is split 50-50. In truth, on probably 99% of issues, everyone is pretty much in agreement. Murder should be illegal, everybody agrees. Sunglasses should be legal, everyone agrees. Nobody has to debate those issues where a popular consensus has already been reached, so the 1% of issues that are still unresolved and could go either way are the ones that the two parties triangulate their platforms around. In each of these edge cases, the two parties position themselves just slightly to the left and to the right of whatever the median voter’s position is, and adopt that position as their official platform – because if they positioned themselves at some more extreme point on the ideological spectrum where there wasn’t a 50-50 split in popular opinion (e.g. murder should be legal, sunglasses should be illegal, etc.), they would lose every election and their party would promptly go extinct. A status quo in which one party favors (say) legalizing marijuana while the other wants to keep it illegal is the only arrangement that makes sense, given that the median voter currently falls roughly midway between those two positions. If either party suddenly decided to start advocating for a much more extreme position – e.g. that marijuana possession should be punishable by death, or that marijuana should be made mandatory and given to preschoolers – then the voting equivalent of natural selection would immediately wipe that party out of existence.

(Fun fact: This is also why you often see competing fast food restaurants located right next to each other; if either of them drifted too far away from the center of their particular area, they’d lose customers who were now closer to their competitors.)

It’s possible for the zeitgeist to shift over time, of course, and for the median position to become more liberal or more conservative than it used to be – and in fact we’ve seen this happen historically on almost every issue. But the point here is that as the median position shifts, the two parties’ platforms shift along with it – which is why mainstream conservatism no longer seeks to criminalize homosexuality, why mainstream liberalism no longer revolves around labor unions, and so on. The median opinion on those issues shifted in a new direction, and the two parties’ platforms shifted along with it. In political science, the formalized version of this is known as the Median Voter Theorem. The parties may be able to occasionally take stances that are a bit on the fringe – after all, most people don’t base their vote on just one issue, so pushing the margins on one or two issues won’t necessarily be a make-or-break prospect – but they can’t get too out of line or deviate from the mainstream on too many issues if they want to remain relevant. For better or for worse, popular consensus is what defines the range of viable political debate (also known as the Overton Window). And that means that if there’s a major public debate over a certain issue, you shouldn’t expect it to be one of the 99% of issues that have clear black-and-white answers that are universally obvious; there will probably be quite a bit of grey area involved, and each side will at least be able to make a case for their stance that’s reasonably plausible to the median voter.
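The core dynamic behind the Median Voter Theorem is simple enough to simulate. Here’s a minimal sketch in Python, with invented voter positions on a one-dimensional left-right axis: each voter supports whichever party’s platform is nearer, and whichever party just lost an election nudges its platform toward the median voter:

    # Toy median-voter dynamic on a one-dimensional left-right axis.
    # Voter positions are invented purely for illustration.
    import random

    random.seed(0)
    voters = [random.gauss(0.0, 1.0) for _ in range(10001)]
    median = sorted(voters)[len(voters) // 2]

    left, right = -2.0, 2.0   # initial party platforms
    for _ in range(200):
        left_votes = sum(abs(v - left) < abs(v - right) for v in voters)
        # The losing party shifts its platform toward the median voter.
        if left_votes * 2 < len(voters):
            left += 0.05 * (median - left)
        else:
            right += 0.05 * (median - right)

    print(f"median voter: {median:+.2f}")
    print(f"platforms:    left {left:+.2f}, right {right:+.2f}")

Run this and both platforms end up hugging the median voter’s position – while a party that refused to move, or lurched toward an extreme, would simply lose every election, which is the “natural selection” described above.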

In fact, when it comes to the really big political clashes that involve a lot of varied elements, it’s often the case that both sides have legitimate points in favor of their arguments, but are just focusing on subtly different aspects of the issue, so they appear to be more at odds with each other than they really are. You can have liberals lamenting the very real and serious problems of corporate consolidation and abuses of market power, while conservatives decry the equally real instances of government waste and overregulation, and both sides can be correct – they’re just arguing two slightly different points. Yes, private-sector corruption is bad and should be reduced; and yes, public-sector corruption is also bad and should be reduced – but there’s no reason why these can’t both be true at the same time; it’s just that each side is emphasizing one over the other because it supports their narrative better.

(It’s for this reason, by the way, that you can often deduce a person’s general stance on an issue based simply on which part of that issue the person chooses to focus on when discussing it. If you’re discussing race and criminal justice, for instance, and the person’s first instinct is to start talking about gang violence and black-on-black crime, they’re probably a conservative – whereas if they start talking about police brutality and discriminatory sentencing practices, they’re probably a liberal. It’s not that either side is wrong – each of their chosen sub-topics is a legitimate problem that needs to be resolved – it’s just that these are two different questions, and the one they’re each choosing to focus on is the one that they feel bolsters their side more. (Incidentally, this is also why if you ever decide to focus on one of the issues that’s not one your side usually emphasizes – e.g. if you’re a liberal who wants to talk about crime in black neighborhoods, or if you’re a conservative who wants to talk about discriminatory sentencing practices – you may find yourself facing suspicions from your own side that you’ve secretly joined the enemy.) John Nerst has a must-read post on this whole dynamic here, explaining how something that one side might regard as a minor exception to the rule is often regarded by the other side as the very core of the issue, and vice-versa.)

There’s an old parable you might have heard of, about a group of blind men who encounter an elephant. Here’s the Wikipedia summary:

A group of blind men heard that a strange animal, called an elephant, had been brought to the town, but none of them were aware of its shape and form. Out of curiosity, they said: “We must inspect and know it by touch, of which we are capable”. So, they sought it out, and when they found it they groped about it. In the case of the first person, whose hand landed on the trunk, said “This being is like a thick snake”. For another one whose hand reached its ear, it seemed like a kind of fan. As for another person, whose hand was upon its leg, said, the elephant is a pillar like a tree-trunk. The blind man who placed his hand upon its side said the elephant “is a wall”. Another who felt its tail, described it as a rope. The last felt its tusk, stating the elephant is that which is hard, smooth and like a spear.

In some versions, the blind men then discover their disagreements, suspect the others to be not telling the truth and come to blows.

Ideological debates can be a lot like that. Each side is perfectly correct in their position, and thinks the other side must be either crazy or dishonest for disagreeing with them. But the other side is perfectly correct in their position too; it’s just that they’re concentrating on a different part of the whole.

If you can look past these different points of emphasis, though, and dig down to the underlying values motivating them, it actually turns out that liberals and conservatives often have more in common than they realize. Both groups, for instance, share the same core principle that it’s wrong for one group of people to dishonestly “game the system” and abuse government power to take hard-earned wealth from people who’ve actually worked for it and redistribute that wealth to themselves. The only difference is that conservatives are generally more inclined to think that the lower-income segments of society – welfare cheats, illegal immigrants, etc. – are the ones doing this, while liberals are more inclined to think that the wealthy – big corporations, bankers, CEOs, etc. – are the ones doing it. What they agree on is that ordinary working-class people are getting screwed, and are shouldering too much of the burden that should rightly be carried by others. And it’s the same story with a lot of other such issues; the two sides may differ in their perception of what the actual situation on the ground is, but if they could get on the same page regarding the facts – if they could both see the elephant in its entirety – then they wouldn’t have much left that they actually disagreed about. Their overall priorities, values, and interests would largely be the same.

Now having said that, of course, there are plenty of issues where this isn’t the case – at least not at the object level. There are some situations in which the opposing sides aren’t just looking at things from different perspectives and focusing on different areas, but actually have conflicting interests at stake. Things like differing incentives, imbalances of power, and mutually incompatible goals all come into play to some extent or another; and in some cases, these factors are decisive. So although it’s certainly a noble aim to try and resolve ideological disputes cooperatively in order to reach a common truth, it’s not always as simple as just dispassionately correcting the factual mistakes made by one or both sides until they’re both in complete agreement (an approach sometimes called “mistake theory”). Sometimes you have to take the extra step of accounting for divergent interests first – because otherwise, the two sides’ interests may just be irreconcilable (a school of thought known as “conflict theory”). Alexander elaborates on the differences between the “mistake theory” and “conflict theory” philosophies:

Mistake theorists treat politics as science, engineering, or medicine. The State is diseased. We’re all doctors, standing around arguing over the best diagnosis and cure. Some of us have good ideas, others have bad ideas that wouldn’t help, or that would cause too many side effects.

Conflict theorists treat politics as war. Different blocs with different interests are forever fighting to determine whether the State exists to enrich the Elites or to help the People.

Mistake theorists view debate as essential. We all bring different forms of expertise to the table, and once we all understand the whole situation, we can use wisdom-of-crowds to converge on the treatment plan that best fits the need of our mutual patient, the State. Who wins on any particular issue is less important than creating an environment where truth can generally prevail over the long term.

Conflict theorists view debate as having a minor clarifying role at best. You can “debate” with your boss over whether or not you get a raise, but only with the shared understanding that you’re naturally on opposite sides, and the “winner” will be based less on objective moral principles than on how much power each of you has. If your boss appeals too many times to objective moral principles, he’s probably offering you a crappy deal.

Mistake theorists treat different sides as symmetrical. There’s the side that wants to increase the interest rate, and the side that wants to decrease it. Both sides have about the same number of people. Both sides include some trustworthy experts and some loudmouth trolls. Both sides are equally motivated by trying to get a good economy. The only interesting difference is which one turns out (after all the statistics have been double-checked and all the relevant points have been debated) to be right about the matter at hand.

Conflict theorists treat the asymmetry of sides as their first and most important principle. The Elites are few in number, but have lots of money and influence. The People are many but poor – yet their spirit is indomitable and their hearts are true. The Elites’ strategy will always be to sow dissent and confusion; the People’s strategy must be to remain united. Politics is won or lost by how well each side plays its respective hand.

It seems clear enough that a lot of ideological conflicts are based on the two sides simply having different object-level goals. Having said that, though, it does seem like people’s meta-level goals are almost always on more or less the same page – and that’s important. As Alexander points out, we all want to have a thriving economy, strong families, equal opportunity, freedom, justice, and so on. So there’s no reason to rule out reasoned discourse just because the two sides’ object-level interests might differ; all it means is that the reasoned discourse should be taking place at one meta-level higher, where everyone’s interests are the same, and the object-level asymmetries can be accounted for within that reasoned discourse, as just another one of the elements factoring into the debate. In other words, just because two sides’ interests are sometimes diametrically opposed in a zero-sum competition – like in a sports game – doesn’t mean that the two sides can’t still work cooperatively outside of that context to make sure that the game itself is set up in a fair way and that it’s optimally serving its intended purpose that they all desire to uphold (as sports leagues do routinely). Or to apply this logic to workers interacting with their bosses, for instance, it’s true that the imbalance of power will define the object-level debate of a worker asking for a raise, but that doesn’t mean that there’s nothing to be gained by taking the discussion up a meta-level and having a debate as a society over whether that power imbalance is justifiable in the first place, what the implications of that are, and so forth. People in different social positions, with different needs and preferences, can still work together and constructively debate how best to meet the fundamental terminal values that they all share. (And when they don’t – i.e. whenever you find two opposing sides who seemingly aren’t able to reach any kind of common understanding – it’s very often because they haven’t been able to zoom out to the appropriate meta-level, but have instead gotten stuck debating the object-level points of contention instead, which are downstream of where their fundamental disagreements actually lie. Instead of backing up and focusing on first principles first, and then going from there, opposing sides too often devote all their time and energy to arguing about the second-order issues that stem from those first-order assumptions – e.g. debating specific tax proposals instead of the fundamental philosophical justifications for taxation, or debating specific restrictions on abortion instead of the religious systems of morality underpinning those restrictions – and then wonder why those arguments never seem to get anywhere or change anyone’s mind. Ultimately, what they often conclude is that there’s simply no possible way of ever reaching any common ground; but that’s a mistake – their problem is simply that they’re trying to put the cart before the horse.)

Now again, needless to say, just because both sides of a debate may have understandable reasons for their positions, and just because it may be valuable to discuss those reasons, that doesn’t mean that both sides are always equally correct, or that there are always two equally valid sides to every argument, or that the truth always lies somewhere in the middle. As Yudkowsky puts it:

Do not think that fairness to all sides means balancing yourself evenly between positions; truth is not handed out in equal portions before the start of a debate.

And as Sagan adds:

Keeping an open mind is a virtue – but, as the space engineer James Oberg once said, not so open that your brains fall out. Of course we must be willing to change our minds when warranted by new evidence. But the evidence must be strong. Not all claims to knowledge have equal merit.

Mainstream news media outlets are often guilty of pretending otherwise – acting as though both sides of a debate are equal – for the sake of maintaining objectivity. But the important point here is that objectivity is not the same thing as neutrality; there can be cases where the facts simply support one side of a debate more than the other. If there’s a debate, for instance, over whether the president is a natural-born US citizen or whether he was born in Kenya, then clearly this is a case where one answer is simply right and the other is wrong – and acting like both sides have just as much validity doesn’t help anyone come any closer to understanding where the truth actually lies; if anything, it just muddies the waters further, which is the opposite of what good analysis should be doing. Noting that one side has all the facts and evidence in its favor, and that the other side doesn’t, may not be a neutral assessment of the debate – but it’s the only one that can rightly be called objective. So if intellectual objectivity is what you’re really after, you have to be willing to call a spade a spade rather than making false pretenses of equivalence.

Paul Krugman expresses his exasperation with media outlets that insist on putting neutrality above objectivity, even in crisis situations where one side is acting reasonably and the other isn’t:

Some of us have long complained about the cult of “balance,” the insistence on portraying both parties as equally wrong and equally at fault on any issue, never mind the facts. I joked long ago that if one party declared that the earth was flat, the headlines would read “Views Differ on Shape of Planet.”

[…]

The cult of balance has played an important role in bringing us to the edge of disaster. For when reporting on political disputes always implies that both sides are to blame, there is no penalty for extremism. Voters won’t punish you for outrageous behavior if all they ever hear is that both sides are at fault.

He continues:

And yes, I think this is a moral issue. The “both sides are at fault” people have to know better; if they refuse to say it, it’s out of some combination of fear and ego, of being unwilling to sacrifice their treasured pose of being above the fray.

It’s not necessarily a bad thing to want to be above the fray, of course; trying to maintain a dispassionate sense of judgment is an important part of being a good truth-seeker (more on that later). But if the debate at hand is over something really significant, then pretending that both sides of the debate are equal – insisting on false centrism just for its own sake – can be dangerously counterproductive. And in the most extreme circumstances, like if there’s a debate over whether it’s morally OK to enslave an entire race of people, or whether it should be legal to deny the vote to an entire gender, then neutrality is positively disastrous. As Archbishop Desmond Tutu puts it:

If you are neutral in situations of injustice, you have chosen the side of the oppressor. If an elephant has its foot on the tail of a mouse and you say that you are neutral, the mouse will not appreciate your neutrality.

Genuine malice may be rare, but where it does exist it’s important to confront it – both to keep it from spreading unchecked and to deter other would-be wrongdoers.

Still, having said all this, it’s worth pointing out that even in such situations, some approaches are more productive than others. As Alexander explains:

Some people are so odious that an alarm needs to be spread. [It’s preferable] to err on the side of not doing that […] but sometimes the line will need to be crossed.

[…]

I think the most important consideration is that it be crossed in a way that doesn’t create a giant negative-sum war-of-all-against-all. That is, Democrats try to get Republicans fired for the crime of supporting Republicanism, Republicans try to get Democrats fired for the crime of supporting Democratism, and the end result is a lot of people getting fired but the overall Republican/Democrat balance staying unchanged.

That suggests a heuristic very much like Be Nice, At Least Until You Can Coordinate Meanness[…]: don’t try to destroy people in order to enforce social norms that only exist in your head. If people violate a real social norm, that the majority of the community agrees upon, and that they should have known – that’s one thing. If you idiosyncratically believe something is wrong, or you’re part of a subculture that believes something is wrong even though there are opposite subcultures that don’t agree – then trying to enforce your idiosyncratic rules by punishing anyone who breaks them is a bad idea.

And one corollary of this is that it shouldn’t be arbitrary. Ten million people tell sexist jokes every day. If you pick one of them, apply maximum punishment to him, and let the other 9.99 million off scot-free, he’s going to think it’s unfair – and he’ll be right. This is directly linked to the fact that there isn’t actually that much of a social norm against telling sexist jokes. My guess is that almost everyone who posts child pornography on Twitter gets in trouble for it, and that’s because there really is a strong anti-child pornography norm.

(this is also how I feel about the war on drugs. One in a thousand marijuana users gets arrested, partly because there isn’t enough political will to punish all marijuana users, partly because nobody really thinks marijuana use is that wrong. But this ends out unfair to the arrested marijuana user, not just because he’s in jail for the same thing a thousand other people did without consequence, but because he probably wouldn’t have done it if he’d really expected to be punished, and society was giving him every reason to think he wouldn’t be.)

This set of norms is self-correcting: if someone does something you don’t like, but there’s not a social norm against it, then your next step should be to create a social norm against it. If you can convince 51% of the community that it’s wrong, then the community can unite against it and you can punish it next time. If you can’t convince 51% of the community that it’s wrong, then you should try harder, not play vigilante and try to enforce your unpopular rules yourself.

If you absolutely can’t tolerate something, but you also can’t manage to convince your community that it’s wrong and should be punished, you should work on finding methods that isolate you from the problem, including building a better community somewhere else. I think some of this collapses into a kind of Archipelago solution. Whatever the global norms may be, there ought to be communities catering to people who want more restrictions than normal, and other communities catering to people who want fewer. These communities should have really explicit rules, so that everybody knows what they’re getting into. People should be free to self-select into and out of those communities, and those self-selections should be honored. Safe spaces, 4chan, and this blog are three very different kinds of intentional communities with unusual but locally-well-defined free speech norms, they’re all good for the people who use them, and as long as they keep to themselves I don’t think outsiders have any right to criticize their existence.

Regardless of which angle you take, though, the crucial thing is just to be able to maintain enough perspective to recognize that not every issue is one of life and death, that not every issue is one of oppressors vs. victims, and that not every issue is one of fundamentally incompatible worldviews. Again, on the vast majority of topics, almost everyone is more or less on the same page; as Hank Green puts it, the challenge is usually “hard problems,” not “bad humans.” And even when disagreements arise, they aren’t typically irreconcilable rifts of earth-shattering consequence – they’re more a matter of trying to parse fine-grained specifics within the same general Overton Window. As Brennan writes:

It’s especially bizarre that mainstream political discussion is so heated and apocalyptic, given how little is at stake [in so many of the issues being discussed]. Republicans and Democrats disagree about many things, but in the logical space of possible political views they’re not merely in the same solar system but also on the same planet. They’re not debating deep existential questions about justice but instead surface disputes about the exact shape of the society they mutually accept. They’ve both agreed to buy the Camry; they’re now just debating whether to get the sport package or hybrid. Their disputes are [typically] tiny. Should we raise the top marginal income tax by 3 percentage points? Should we keep the minimum wage where it is or raise it by three dollars per hour? Should we pay $1 trillion a year for education or $1.2 trillion? Should employers be required to pay for birth control, or should women who work for closely held family corporations with fundamentalist owners have to pay ten to fifty dollars a month from their own pockets?

It’s one thing to be fighting for your life in an all-or-nothing conflict where the right side and the wrong side are unmistakably defined in absolute terms – like if you’re part of an ethnic minority trying to escape genocide or something. But such drastic circumstances are so rare that most Americans will never encounter them in their lifetimes; practically every contentious issue we encounter in everyday life – once you accurately recognize the positions that most people actually hold – turns out to be one of those more fine-grained disputes, where there are at least some ambiguous grey areas and the right answer isn’t necessarily obvious to everyone. And in such cases, a combative, absolutist approach just isn’t the right tool for the job. The best way to make progress in these areas is through collaborative truth-seeking – where the two sides with different perspectives put their heads together to compare ideas, help identify strong points and weak points in each other’s arguments, and jointly work their way a little closer to the broader truth. It’s not always necessary to agree with someone else’s worldview in this context – but being able to understand it, accurately and on its own terms, is a crucial step toward making legitimate long-term progress on the issue. As Ben Wave observes:

I find it more productive to try and cooperate with my opposites who like cooperation than to fight against my opposites who do not.

And commenter A7exrolance puts it this way:

Honestly, I hate this “me against them” mentality when discussing things like this. The highest form of discourse should be a mutual pursuit of truth. The use of antagonizing language (e.g. opponent) promotes the thinking that your objective should be to “win” an argument, which is how most people view heavy discussions which could otherwise serve to be more constructive. If people argued to learn instead of arguing to win, perhaps opposing sides as they are now would be able to come together and get closer to a resolvable truth.

To borrow an old piece of relationship advice, debate works best when it’s not “you vs. your opponent,” but rather “you and them vs. the problem.” The goal shouldn’t be to prove that you’re right and were right all along; the goal should simply be to find truth. Or as Ann Farmer puts it: “It isn’t about being right. It’s about getting it right.”

Ideally, “you” and “your opponent” shouldn’t even enter into the equation at all; as best you can, you should take your egos out of the picture entirely and just let the ideas be what compete against each other, not the people.

Of course, finding openings for this kind of adversarial collaboration can be hard in our current ideological environment, where seemingly everyone is arguing to win rather than arguing to learn – simply trying to display understanding rather than to gain understanding, as Galef puts it. So Alexander offers some guidelines that can be used to make debates more constructive:

Here’s what I think are minimum standards to deserve the [designation “Purely Logical Debate”]:

1. Debate where two people with opposing views are talking to each other (or writing, or IMing, or some form of bilateral communication). Not a pundit putting an article on Huffington Post and demanding Trump supporters read it. Not even a Trump supporter who comments on the article with a counterargument that the author will never read. Two people who have chosen to engage and to listen to one another.

2. Debate where both people want to be there, and have chosen to enter into the debate in the hopes of getting something productive out of it. So not something where someone posts a “HILLARY IS A CROOK” meme on Facebook, someone gets really angry and lists all the reasons Trump is an even bigger crook, and then the original poster gets angry and has to tell them why they’re wrong. Two people who have made it their business to come together at a certain time in order to compare opinions.

3. Debate conducted in the spirit of mutual respect and collaborative truth-seeking. Both people reject personal attacks or ‘gotcha’ style digs. Both people understand that the other person is around the same level of intelligence as they are and may have some useful things to say. Both people understand that they themselves might have some false beliefs that the other person will be able to correct for them. Both people go into the debate with the hope of convincing their opponent, but not completely rejecting the possibility that their opponent might convince them also.

4. Debate conducted outside of a high-pressure point-scoring environment. No audience cheering on both participants to respond as quickly and bitingly as possible. If it can’t be done online, at least do it with a smartphone around so you can open Wikipedia to resolve simple matters of fact.

5. Debate where both people agree on what’s being debated and try to stick to the subject at hand. None of this “I’m going to vote Trump because I think Clinton is corrupt” followed by “Yeah, but Reagan was even worse and that just proves you Republicans are hypocrites” followed by “We’re hypocrites? You Democrats claim to support women’s rights but you love Muslims who make women wear headscarves!” Whether or not it’s hypocritical to “support women’s rights” but “love Muslims”, it doesn’t seem like anyone is even trying to change each other’s mind about Clinton at this point.

These to me seem like the bare minimum conditions for a debate that could possibly be productive.

(See also Liam Rosen’s great guide to arguing constructively here.)

Upholding these standards isn’t always easy. Alexander continues:

The world is a scary place, full of bad people who want to hurt you, and in the state of nature you’re pretty much obligated to engage in whatever it takes to survive.

But instead of sticking with the state of nature, we have the ability to form communities built on mutual disarmament and mutual cooperation. Despite artificially limiting themselves, these communities become stronger than the less-scrupulous people outside them, because they can work together effectively and because they can boast a better quality of life that attracts their would-be enemies to join them. At least in the short term, these communities can resist races to the bottom and prevent the use of personally effective but negative-sum strategies.

One such community is the kind where members try to stick to rational discussion as much as possible. These communities are definitely better able to work together, because they have a powerful method of resolving empirical disputes. They definitely offer a better quality of life, because you don’t have to deal with constant insult wars and personal attacks. And the existence of such communities provides positive externalities to the outside world, since they are better able to resolve difficult issues and find truth.

But forming a rationalist community isn’t just about having the will to discuss things well. It’s also about having the ability. Overcoming bias is really hard, and so the members of such a community need to be constantly trying to advance the art and figure out how to improve their discussion tactics.

That’s why another important part of fostering constructive discourse, in addition to upholding these standards yourself, is to hold your friends and allies accountable to them as well. Exerting a little positive peer pressure can go a long way toward keeping the more radical elements of your side from getting too unhinged and derailing the debate; and even a subtle shift in conversational norms can do wonders for creating an environment where ideas can be exchanged more freely. QualiaSoup and TheraminTrees illustrate this point with a personal story from their own lives:

One aspect of the culture at our secondary school was that students were expected to respond aggressively to insults, specifically insults to their families, especially their mothers. The insults didn’t need to be that imaginative. Often just the phrase “your mum” was enough. In response, students were expected to get angry and physically defend their mothers’ honor. If they didn’t, they were seen as weak and cowardly. This ritualized behavior continued for a couple of years; then there was an interesting shift. Aggressive reactions to insults stopped being admired. Students who were easily provoked came to be seen as weak and fragile, and were scorned for being hysterical. Shrugging off insults became the cool thing to do. The frequency of insults didn’t go down, but the frequency of aggressive responses plummeted.

[…]

In groups where dramatic expressions of offense are encouraged and rewarded, we can make two predictions. First, we can expect to see more expressions of offense. Second, we can expect to see them in response to increasingly smaller provocations, as individuals hunt for things to act offended about. But when overblown reactions to events receive scorn instead of sympathy, the emotional displays can fizzle out fast, exposing the fact that they’re unnecessary and well within personal control. If teenagers can learn to control their responses like this, shouldn’t we expect even more emotional maturity from adults?

The ability to resist your hostile impulses and actually work with your opponents can enable you to accomplish things that would be impossible within a strictly antagonistic dynamic; and if you’re able to get others to adopt a more collaborative mindset too, this effect is magnified all the more. It can seem galling at first to even consider the notion of cooperating with the enemy or compromising on sacred beliefs – but if both sides of a debate just take a breath and make a real effort to meet each other where they are, the results can sometimes be downright miraculous. As Pinker writes:

An ingenious rerouting of the psychology of taboo in the service of peace has recently been explored by Scott Atran, working with the psychologists Jeremy Ginges and Douglas Medin and the political scientist Khalil Shikaki. In theory, peace negotiations should take place within a framework of Market Pricing. A surplus is generated when adversaries lay down their arms – the so-called peace dividend – and the two sides get to yes by agreeing to divide it. Each side compromises on its maximalist demand in order to enjoy a portion of that surplus, which is greater than what they would end up with if they walked away from the table and had to pay the price of continuing conflict.

Unfortunately, the mindset of sacredness and taboo can confound the best-laid plans of rational deal-makers. If a value is sacred in the minds of one of the antagonists, then it has infinite value, and may not be traded away for any other good, just as one may not sell one’s child for any other good. People inflamed by nationalist and religious fervor hold certain values sacred, such as sovereignty over hallowed ground or an acknowledgment of ancient atrocities. To compromise them for the sake of peace or prosperity is taboo. The very thought unmasks the thinker as a traitor, a quisling, a mercenary, a whore.

In a daring experiment, the researchers did not simply avail themselves of the usual convenience sample of a few dozen undergraduates who fill out questionnaires for beer money. They surveyed real players in the Israel-Palestine dispute: more than six hundred Jewish settlers in the West Bank, more than five hundred Palestinian refugees, and more than seven hundred Palestinian students, half of whom identified with Hamas or Palestinian Islamic Jihad. The team had no trouble finding fanatics within each group who treated their demands as sacred values. Almost half the Israeli settlers indicated that it would never be permissible for the Jewish people to give up part of the Land of Israel, including Judea and Samaria (which make up the West Bank), no matter how great the benefit. Among the Palestinians, more than half the students indicated that it was impermissible to compromise on sovereignty over Jerusalem, no matter how great the benefit, and 80 percent of the refugees held that no compromise was possible on the “right of return” of Palestinians to Israel.

The researchers divided each group into thirds and presented them with a hypothetical peace deal that required all sides to compromise on a sacred value. The deal was a two-state solution in which the Israelis would withdraw from 99 percent of the West Bank and Gaza but would not have to absorb Palestinian refugees. Not surprisingly, the proposal did not go over well. The absolutists on both sides reacted with anger and disgust and said that they would, if necessary, resort to violence to oppose the deal.

With a third of the participants, the deals were sweetened with cash compensation from the United States and the European Union, such as a billion dollars a year for a hundred years, or a guarantee that the people would live in peace and prosperity. With these sweeteners on the table, the nonabsolutists, as expected, softened their opposition a bit. But the absolutists, forced to contemplate a taboo tradeoff, were even more disgusted, angry, and prepared to resort to violence. So much for the rational-actor conception of human behavior when it comes to politico-religious conflict.

All this would be pretty depressing were it not for Tetlock’s observation that many ostensibly sacred values are really pseudo-sacred and may be compromised if a taboo tradeoff is cleverly reframed. In a third variation of the hypothetical peace deal, the two-state solution was augmented with a purely symbolic declaration by the enemy in which it compromised one of its sacred values. In the deal presented to the Israeli settlers, the Palestinians “would give up any claims to their right of return, which is sacred to them,” or “would be required to recognize the historic and legitimate right of the Jewish people to Eretz Israel.” In the deal presented to the Palestinians, Israel would “recognize the historic and legitimate right of the Palestinians to their own state and would apologize for all of the wrongs done to the Palestinian people,” or would “give up what they believe is their sacred right to the West Bank,” or would “symbolically recognize the historic legitimacy of the right of return” (while not actually granting it). The verbiage made a difference. Unlike the bribes of money or peace, the symbolic concession of a sacred value by the enemy, especially when it acknowledges a sacred value on one’s own side, reduced the absolutists’ anger, disgust, and willingness to endorse violence. The reductions did not shrink the absolutists’ numbers to a minority of their respective sides, but the proportions were large enough to have potentially reversed the outcomes of their recent national elections.

The implications of this manipulation of people’s moral psychology are profound. To find anything that softens the opposition of Israeli and Palestinian fanatics to what the rest of the world recognizes as the only viable solution to their conflict is something close to a miracle. The standard tools of diplomacy wonks, who treat the disputants as rational actors and try to manipulate the costs and benefits of a peace agreement, can backfire. Instead they must treat the disputants as moralistic actors, and manipulate the symbolic framing of the peace agreement, if they want a bit of daylight to open up. The human moral sense is not always an obstacle to peace, but it can be when the mindset of sacredness and taboo is allowed free rein. Only when that mindset is redeployed under the direction of rational goals will it yield an outcome that can truly be called moral.

In more general terms, constructive discourse isn’t just helpful for reconciling competing goals; if you’re really willing to put your heads together with people who have different outlooks from your own, it can sometimes lead both sides to explore new ground that neither was even aware of before. If you and your opponent are genuinely able to momentarily set aside the urgency of “winning the argument” and focus instead on figuring out where the truth actually lies, you can sometimes produce new insights jointly that you might never have come up with as individuals working alone. As the old cliché goes, two heads are better than one – and if the two heads are approaching an issue from radically different angles, so much the better. McRaney elaborates:

According to the scientists who subscribe to [argumentation] theory, when we reason alone, that’s when we’re biased, that’s when we get ourselves into trouble, producing weak arguments that we believe are strong. It’s when we contribute our arguments to a pool, and then everyone together samples from that pool, and evaluates those arguments against one another, that reasoning can accomplish amazing things. It’s then that the poor arguments fail and the best arguments win. For instance, there’s this test in psychology called the Cognitive Reflection Task, which has these questions that people usually get wrong, like “If it takes five machines five minutes to make five widgets, how long would it take 100 machines to make 100 widgets?” Now, the answer to that is five minutes, but most people don’t get it right. “Each machine makes one widget per [five minutes]” is the solution. Now, reasoning alone, 83% of people who take that test under laboratory conditions answer at least one of the questions incorrectly. A third get all of them wrong. But in groups of three or more, no one gets any wrong. At least one member always sees the correct answer. And that person is often not very confident at first, but the resulting debate leads to the truth. Now, of course we have to take motivated reasoning into account here; people usually have a goal in mind when they’re arguing one thing or another. But when the goal is to be right, reasoning together often achieves that goal in a way that reasoning alone cannot. Cognitive scientist Tom Stafford says that when a group has developed a strong sense of trust and it faces a common goal, [then] when the majority is wrong, the few who are correct can bring the population around to the right answer. In fact, this is the whole idea behind argumentation theory.
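
The trap in that widget question is tracking total time instead of per-machine rate. Here’s a quick illustrative sketch of the rate-based reasoning in Python (the variable names are just mine, for exposition):

  # Each machine's rate is the thing that stays constant:
  # 5 machines make 5 widgets in 5 minutes, so each machine
  # makes 1 widget per 5 minutes.
  rate_per_machine = 5 / (5 * 5)        # 0.2 widgets per machine-minute

  # 100 machines producing 100 widgets at that same constant rate:
  minutes_needed = 100 / (100 * rate_per_machine)
  print(minutes_needed)                 # 5.0 -- still five minutes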

The tricky part, of course, is developing that strong sense of trust and that sense that everybody shares a common goal. There are a few different ways this can be accomplished, though, depending on the situation. In some settings, for instance (like when a group is trying to work through a problem together as a team), it can change the whole dynamic just to have someone in the room who’s willing to set the tone, fearlessly going out on limbs and making bold statements, in order to show everyone else that it’s safe to do so without fearing backlash. As McRaney puts it:

Every team needs at least one asshole who doesn’t give a shit if he or she gets fired or exiled or excommunicated. For a group to make good decisions, they must allow dissent and convince everyone they are free to speak their mind without risk of punishment.

In other cases, it can be as straightforward as just making it an explicit rule that people won’t be judged or punished for weird or outrageous ideas. Fisher and Ury suggest that instead of framing the dialogue as a debate or an argument, it can be more productive to frame it as a freewheeling brainstorming session:

A brainstorming session is designed to produce as many ideas as possible to solve the problem at hand. The key ground rule is to postpone all criticism and evaluation of ideas. The group simply invents ideas without pausing to consider whether they are good or bad, realistic or unrealistic. With those inhibitions removed, one idea should stimulate another, like firecrackers setting off one another. In a brainstorming session, people need not fear looking foolish since wild ideas are explicitly encouraged.

And Yudkowsky echoes the benefits of refraining from judgment until even the most far-out ideas have been thoroughly explored. Citing Robyn Dawes, he elaborates:

From pp. 55-56 of Robyn Dawes’s Rational Choice in an Uncertain World. Bolding added.

Norman R. F. Maier noted that when a group faces a problem, the natural tendency of its members is to propose possible solutions as they begin to discuss the problem. Consequently, the group interaction focuses on the merits and problems of the proposed solutions, people become emotionally attached to the ones they have suggested, and superior solutions are not suggested. Maier enacted an edict to enhance group problem solving: “Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any.” It is easy to show that this edict works in contexts where there are objectively defined good solutions to problems.

Maier devised the following “role playing” experiment to demonstrate his point. Three employees of differing ability work on an assembly line. They rotate among three jobs that require different levels of ability, because the most able – who is also the most dominant – is strongly motivated to avoid boredom. In contrast, the least able worker, aware that he does not perform the more difficult jobs as well as the other two, has agreed to rotation because of the dominance of his able co-worker. An “efficiency expert” notes that if the most able employee were given the most difficult task and the least able the least difficult, productivity could be improved by 20%, and the expert recommends that the employees stop rotating. The three employees and a fourth person designated to play the role of foreman are asked to discuss the expert’s recommendation. Some role-playing groups are given Maier’s edict not to discuss solutions until having discussed the problem thoroughly, while others are not. Those who are not given the edict immediately begin to argue about the importance of productivity versus worker autonomy and the avoidance of boredom. Groups presented with the edict have a much higher probability of arriving at the solution that the two more able workers rotate, while the least able one sticks to the least demanding job – a solution that yields a 19% increase in productivity.

I have often used this edict with groups I have led – particularly when they face a very tough problem, which is when group members are most apt to propose solutions immediately. While I have no objective criterion on which to judge the quality of the problem solving of the groups, Maier’s edict appears to foster better solutions to problems.

This is so true it’s not even funny. And it gets worse and worse the tougher the problem becomes. Take Artificial Intelligence, for example. A surprising number of people I meet seem to know exactly how to build an Artificial General Intelligence, without, say, knowing how to build an optical character recognizer or a collaborative filtering system (much easier problems). And as for building an AI with a positive impact on the world – a Friendly AI, loosely speaking – why, that problem is so incredibly difficult that an actual majority resolve the whole issue within 15 seconds. Give me a break.

(Added: This problem is by no means unique to AI. Physicists encounter plenty of nonphysicists with their own theories of physics, economists get to hear lots of amazing new theories of economics. If you’re an evolutionary biologist, anyone you meet can instantly solve any open problem in your field, usually by postulating group selection. Et cetera.)

Maier’s advice echoes the principle of the bottom line, that the effectiveness of our decisions is determined only by whatever evidence and processing we did in first arriving at our decisions – after you write the bottom line, it is too late to write more reasons above. If you make your decision very early on, it will, in fact, be based on very little thought, no matter how many amazing arguments you come up with afterward.

And consider furthermore that We Change Our Minds Less Often Than We Think: 24 people assigned an average 66% probability to the future choice thought more probable, but only 1 in 24 actually chose the option thought less probable. Once you can guess what your answer will be, you have probably already decided. If you can guess your answer half a second after hearing the question, then you have half a second in which to be intelligent. It’s not a lot of time.

Traditional Rationality emphasizes falsification – the ability to relinquish an initial opinion when confronted by clear evidence against it. But once an idea gets into your head, it will probably require way too much evidence to get it out again. Worse, we don’t always have the luxury of overwhelming evidence.

I suspect that a more powerful (and more difficult) method is to hold off on thinking of an answer. To suspend, draw out, that tiny moment when we can’t yet guess what our answer will be; thus giving our intelligence a longer time in which to act.

Even half a minute would be an improvement over half a second.

As Yudkowsky illustrates, this technique of refraining from judgment until all possible answers have been explored isn’t just useful for making group dialogues more productive; it’s also a valuable tool for sharpening your own thinking as an individual – which, really, is even more fundamental. The best way to make the most of your exchanges with others, after all, is to first get your own thoughts straight; so ideally you should spend even more time honing your ideas in your own head than you spend bouncing them off other people. Knowing how to figure out what’s really true amidst all the noise is one of the most important skills a person can have; and the above technique of delaying judgment is only one method for avoiding mental traps – there are a lot more where that came from. So let’s now discuss a few of those.

XV.

Being aware of all the hallmarks of good discourse can make it much easier to recognize and avoid bad discourse – one of the first steps toward better thinking. If you can identify which sources of information are genuinely pursuing the truth, and which ones just seem more interested in posturing and arguing for their own sake, you can more readily filter out the “junk food” – i.e. those sources of ideological content that, while satisfying to consume, don’t actually teach you anything new or bring you any transformative insights. The most obvious offenders in this category, of course, are those that openly proclaim their lack of intellectual curiosity – the ones that pride themselves on avoiding deep deliberation because they consider it a distraction from the fight on the ground. Dagny discusses her own experience in such circles:

Anti-intellectualism is a pill I swallowed, but it got caught in my throat, and that would eventually save me. It comes in a few forms. Activists in these circles often express disdain for theory because they take theoretical issues to be idle sudoku puzzles far removed from the real issues on the ground. This is what led one friend of mine to say, in anger and disbelief, “People’s lives aren’t some theoretical issue!” That same person also declared allegiance to a large number of theories about people’s lives, which reveals something important. Almost everything we do depends on one theoretical belief or another, which range from simple to complex and from implicit to explicit. A theoretical issue is just a general or fundamental question about something that we find important enough to think about. Theoretical issues include ethical issues, issues of political philosophy, and issues about the ontological status of gender, race, and disability. Ultimately, it’s hard to draw a clear line between theorizing and thinking in general. Disdain for thinking is ludicrous, and no one would ever express it if they knew that’s what they were doing.

Specifically on the radical leftist side of things, one problem created by this anti-theoretical bent is a lot of rhetoric and bluster, a lot of passionate railing against the world or some aspect of it, without a clear, detailed, concrete alternative. There was a common excuse for this. As an activist friend wrote in an email, “The present organization of society fatally impairs our ability to imagine meaningful alternatives. As such, constructive proposals will simply end up reproducing present relations.” This claim is couched in theoretical language, but it is a rationale for not theorizing about political alternatives. For a long time I accepted this rationale. Then I realized that mere opposition to the status quo wasn’t enough to distinguish us from nihilists. In the software industry, a hyped-up piece of software that never actually gets released is called “vapourware.” We should be wary of political vapourware. If somebody’s alternative to the status quo is nothing, or at least nothing very specific, then what are they even talking about? They are hawking political vapourware, giving a “sales pitch” for something that doesn’t even exist.

These kinds of attempts to win arguments through sheer conviction rather than actual content – to use a bunch of absolutist rhetoric as a substitute for putting forth real substance – are another big red flag to watch out for. In fact, it’s worth paying attention to people’s tone just in general; most of the time, the more sanctimonious a writer or commentator’s tone is – the more they act like they know it all and start all their arguments with “Let me educate you on this” – the more you should take what they say with a grain of salt, because it’s a good sign that they’re more interested in portraying themselves as an authority than they are in actually finding out what the reality of an issue is. It’s not necessarily that being sanctimonious makes people more wrong, mind you; after all, there are sanctimonious people on both sides of every issue, and they can’t all be wrong. It’s more that the people who tend to make genuinely thoughtful judgments – who are wary of oversimplification and try to avoid leaping to premature conclusions – are more likely to be humble and judicious when presenting their ideas. They’re more likely to explain why their positions are correct in a thorough, methodical way, knowing that if their ideas are strong enough they’ll be able to stand up on their own – whereas the people whose opinions are less practically grounded will be more likely to use grandstanding and embellishment to make their case seem more convincing. If you’ve spent any time on the internet or watched any cable news, you’ve no doubt seen plenty of this; browsing through old posts online, there’s no end to the self-proclaimed experts declaring with absolute certainty that hyperinflation will destroy the economy within the year, or that Hillary Clinton is a lock to win the presidency, or that Google is a fad that’s about to collapse (this site is a good example of the kind of tone you typically see). The recurring theme with all of them is that they regard their opinions as completely obvious, and they can’t even imagine how anyone could assert the opposing view with a straight face; anyone who does so, in their estimation, must be either dishonest or braindead. Their level of confidence in these assertions is absolute – and of course, all too often it has no correlation whatsoever with how accurate the assertions actually turn out to be in the end.

It might be easy to laugh at these people’s certitude after they’re proven monumentally wrong; but being able to recognize it before the fact is trickier. You have to watch out for the kinds of persuasive techniques they use, because if you’re not careful they can creep into your judgment and distort your own level of confidence in your beliefs without you realizing it. If the only people you listen to are the ones who are constantly asserting with absolute confidence that your side is the correct one, you’re likely to start developing a false sense of certainty yourself. So as commenter stupidestpuppy advises, you should be mindful of your own mental state when you come across ideas that feel particularly satisfying to indulge:

I feel like a good way to approach news and politics is to be extra skeptical about anything that makes you happy, angry, or smug. Because you want to have those emotions you’re more willing to accept arguments that are illogical or not backed up by facts. There are too many people who accept things to be true because they want them to be true.

It’s easy to fall into the trap of becoming unduly confident in your own beliefs just because they feel so right that they simply have to be true. If a particular idea has an incredibly cohesive internal logic to it, and seems to beautifully integrate all the pieces of the puzzle into one simple explanation – if it’s the kind of idea that makes you think “My God, this explains everything” in a sudden rush of clarity – the sheer elegance of its explanatory power can be so seductive that it turns things like, say, actual real-world evidence and factual substantiation into mere afterthoughts. Of course, the fact that an explanation is incredibly compelling doesn’t constitute even the slightest bit of evidence that it’s actually true. As H.L. Mencken famously said:

There is always a well-known solution to every human problem – neat, plausible, and wrong.

And in theory, this seems easy to accept. But in practice, it’s harder to adhere to. If you’ve got an explanation that you really like, your subconscious impulse will be to resist any counterargument that might force you to relinquish it. Because ideas that are comfortable and satisfying are easier to accept than ideas that are uncomfortable and inconvenient, you’re more likely to treat them as true, whether they actually are or not. It’s the whole motivated reasoning thing again.

But motivated reasoning and false certainty are dead ends. To borrow an example from Joseph Romm, think about what you would do in a situation where the stakes really were a matter of life and death – like if you thought that you (or your child) might have contracted a life-threatening disease. Would you try to find the one doctor out of 100 who was willing to tell you there was no problem and you had nothing to worry about? Or would you try to find the doctor who was the most accurate, and who always gave correct diagnoses even if they were upsetting to hear? If the truth actually matters – if there are real consequences at stake – then you can’t just insist that your preferred conclusions are the ones you’re going to believe. You have to be willing to consider the ugly, unsatisfying possibilities as well – because those might be the ones that are actually true. You have to insist that your map actually match the territory, because otherwise you may find yourself somewhere you don’t want to be.

The real key to getting at the truth is to resist the urge to think like a lawyer – only looking for good ideas on your own side and only looking for bad ideas on the opposing side – and instead to think like a scientist – looking for both good ideas that you can use and bad ideas that you can shoot down on both sides. If the ideas you favor really are the strongest ones available, then they’ll be able to withstand even the harshest scrutiny – but the only way to find out if that’s the case is to actually run them through that gauntlet. As McRaney notes, it’s this approach which has always produced the best results throughout history:

Your natural tendency is to start from a conclusion and work backward to confirm your assumptions, but the scientific method drives down the wrong side of the road and tries to disconfirm your assumptions. A couple of centuries back people began to catch on to the fact that looking for disconfirming evidence was a better way to conduct research than proceeding from common belief. They saw that eliminating suspicions caused the outline of the truth to emerge. Once your forefathers and foremothers realized that this approach generated results, in a few generations your species went from burning witches and drinking mercury to mapping the human genome and playing golf on the moon.

Actively trying to disprove your own beliefs – particularly ones that you feel strongly about – can feel wrong and unnatural, like you’re going against everything you believe in (because after all, that is technically what you’re doing). But if you’re able to scrutinize the arguments from your own side and look for bad ideas to shoot down just as critically as you would if those arguments were coming from the opposing side, you can often find flaws in your ideology that you might never have noticed otherwise – and you can make corrections and improvements that you might otherwise have overlooked. Taking a tough-love approach to your own worldview is a way of strengthening it, not weakening it. If you’re your own harshest critic, then you don’t have to be told you’re wrong. And conversely, if you’re able to view the opposing side’s arguments charitably and with an open mind, you can often discover new ideas that can be integrated into your worldview to strengthen it further still. It doesn’t necessarily mean you have to fully buy into a worldview you disagree with – you can “try on” different worldviews and explore their implications without converting fully to the beliefs you’re exploring. But this ability to try on different worldviews and dispassionately follow them to their logical conclusions – even ones that you’d vehemently object to in your usual mode of judgment – is an invaluable skill, which can allow you to see things and make connections no one else would notice. As Aristotle is claimed to have said: “It is the mark of an educated mind to be able to entertain a thought without accepting it.”

(Nerst also provides a good illustration of the point in this post; the more worldviews you can add to your conceptual toolkit – keeping them handy to deploy whenever they might be useful – the better.)

Here’s a good method for accomplishing this that you can try for yourself: The next time you’ve got an idea you’re really infatuated with, don’t just consider whether it’s true or false – try outright assuming that it’s false and see where that leads you. Like, if there were a genie who could grant you the power to know the whole truth of the universe, and it turned out (to your great shock) that your idea was unambiguously false, and your opponents’ ideas were unambiguously true, then how would you explain that? What possible explanations could you come up with that might be plausible?

This technique, of pre-emptively taking it for granted that you’re utterly wrong and then trying to figure out explanations for why, can be a much more effective way of finding the cracks in your ideology than the more traditional approach, as Tetlock and Gardner point out:

Researchers have found that merely asking people to assume their initial judgment is wrong, to seriously consider why that might be, and then make another judgment, produces a second estimate which, when combined with the first, improves accuracy almost as much as getting a second estimate from another person.
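
To make the mechanics of that concrete, here’s a minimal toy simulation (the quantity, noise parameters, and variable names below are all invented purely for illustration – this is a sketch of the averaging principle, not of the researchers’ actual method). It treats the first judgment and the “assume you were wrong” second judgment as two partially independent noisy estimates of the same quantity, and averages them:

  import random

  random.seed(0)
  truth = 100.0                 # the quantity being estimated (made up)
  trials = 10_000
  first_total = combined_total = 0.0

  for _ in range(trials):
      # Two partially independent guesses at the same quantity;
      # the biases and spreads here are arbitrary stand-ins:
      first = truth + random.gauss(10, 20)    # initial judgment, somewhat biased
      second = truth + random.gauss(-5, 25)   # revised judgment after assuming error
      first_total += abs(first - truth)
      combined_total += abs((first + second) / 2 - truth)

  print(first_total / trials)     # mean error of the first judgment alone
  print(combined_total / trials)  # mean error of the averaged pair -- smaller here

The averaged estimate comes out ahead because the two judgments’ errors partially cancel – the same statistical logic that makes polling a second person useful.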

And it can be effectively applied to all kinds of situations, from boardrooms to marriages, as Alexander adds:

There’s a rationalist tradition […] that before you get married, you ask all your friends to imagine that the marriage failed and tell you why. I guess if you just asked people “Will our marriage fail?” everyone would say no, either out of optimism or social desirability bias. If you ask “Assume our marriage failed and tell us why”, you’ll actually hear people’s concerns. I think this is the same principle.

Gary Klein explains the benefits of this technique for group projects (and even gives it a catchy name):

Projects fail at a spectacular rate. One reason is that too many people are reluctant to speak up about their reservations during the all-important planning phase. By making it safe for dissenters who are knowledgeable about the undertaking and worried about its weaknesses to speak up, you can improve a project’s chances of success.

Research conducted in 1989 by Deborah J. Mitchell, of the Wharton School; Jay Russo, of Cornell; and Nancy Pennington, of the University of Colorado, found that prospective hindsight – imagining that an event has already occurred – increases the ability to correctly identify reasons for future outcomes by 30%. We have used prospective hindsight to devise a method called a premortem, which helps project teams identify risks at the outset.

A premortem is the hypothetical opposite of a postmortem. A postmortem in a medical setting allows health professionals and the family to learn what caused a patient’s death. Everyone benefits except, of course, the patient. A premortem in a business setting comes at the beginning of a project rather than the end, so that the project can be improved rather than autopsied. Unlike a typical critiquing session, in which project team members are asked what might go wrong, the premortem operates on the assumption that the “patient” has died, and so asks what did go wrong. The team members’ task is to generate plausible reasons for the project’s failure.

A typical premortem begins after the team has been briefed on the plan. The leader starts the exercise by informing everyone that the project has failed spectacularly. Over the next few minutes those in the room independently write down every reason they can think of for the failure – especially the kinds of things they ordinarily wouldn’t mention as potential problems, for fear of being impolitic.

[…]

Next the leader asks each team member, starting with the project manager, to read one reason from his or her list; everyone states a different reason until all have been recorded. After the session is over, the project manager reviews the list, looking for ways to strengthen the plan.

[…]

Although many project teams engage in prelaunch risk analysis, the premortem’s prospective hindsight approach offers benefits that other methods don’t. Indeed, the premortem doesn’t just help teams to identify potential problems early on. It also reduces the kind of damn-the-torpedoes attitude often assumed by people who are overinvested in a project. Moreover, in describing weaknesses that no one else has mentioned, team members feel valued for their intelligence and experience, and others learn from them. The exercise also sensitizes the team to pick up early signs of trouble once the project gets under way. In the end, a premortem may be the best way to circumvent any need for a painful postmortem.

Ultimately, being able to find the faults in your own ideas just comes down to being able to put yourself in the right mindset. If you’re treating an idea like something you need to promote and protect, allowing yourself to admit its flaws won’t come easily – but if you remove that mental need to protect your idea by taking it for granted that it has already failed, you free your mind from the subconscious constraints you’ve placed on it, and can accordingly uncover mistakes that you might not have been allowing yourself to see before.

There are other good tricks for getting yourself into a more open frame of mind as well. For instance, if you notice yourself feeling less receptive to opposing ideas than you’d like to be – say, if you’re a moderate liberal who wants to better understand Hayek’s arguments against government intervention in the market, but you find it hard to overcome your knee-jerk revulsion toward anything coming from the conservative side – it can be helpful to imagine that you’re reading the material to someone who’s even further down the ideological scale from you than you are from the material itself (an ultra-left communist, say). If you can imagine that you’re trying to find arguments to persuade that hypothetical extremist to adopt a more moderate position, you may find yourself feeling more receptive to conservative ideas (at least the good ones) in general. Similarly, if you’re (say) a conservative trying to better understand liberal arguments for feminism, you might imagine what kind of mindset you’d take toward someone who was significantly more conservative than you regarding gender roles – who believed that women should be completely subservient to men in every way, for instance. In so doing, you may find yourself feeling more open than usual to good feminist arguments that could be useful in a hypothetical debate with such a person.

Another technique, recommended by Luke Muehlhauser, is to override your feeling of certainty that you already have all the answers – and instead get curious about the things you don’t know – by importing that feeling of curious uncertainty from other contexts where you’ve already experienced it:

Step 1: Feel that you don’t already know the answer.

If you have beliefs about the matter already, push the “reset” button and erase that part of your map. You must feel that you don’t already know the answer.

Exercise 1.1: Import the feeling of uncertainty.

  1. Think of a question you clearly don’t know the answer to. Who composed the Voynich manuscript? When will AI be created? Is my current diet limiting my cognitive abilities? Is it harder to become the Prime Minister of Britain or the President of France?
  2. Close your eyes and pay attention to how that blank spot on your map feels. (To me, it feels like I can see a silhouette of someone in the darkness ahead, but I wouldn’t take bets on who it is, and I expect to be surprised by their identity when I get close enough to see them.)
  3. Hang on to that feeling or image of uncertainty and think about the thing you’re trying to get curious about. If your old certainty creeps back, switch to thinking about who composed the Voynich manuscript again, then import that feeling of uncertainty into the thing you’re trying to get curious about, again.

Exercise 1.2: Consider all the things you’ve been confident but wrong about.

  1. Think of things you once believed but were wrong about. The more similar those beliefs are to the beliefs you’re now considering, the better.
  2. Meditate on the frequency of your errors, and on the depths of your biases (if you know enough cognitive psychology).

Step 2: Want to know the answer.

Now, you must want to fill in this blank part of your map.

You mustn’t wish it to remain blank due to apathy or fear. Don’t avoid getting the answer because you might learn you should eat less pizza and more half-sticks of butter. Curiosity seeks to annihilate itself.

You also mustn’t let your desire that your inquiry have a certain answer block you from discovering how the world actually is. You must want your map to resemble the territory, whatever the territory looks like. This enables you to change things more effectively than if you falsely believed that the world was already the way you want it to be.

Exercise 2.1: Visualize the consequences of being wrong.

  1. Generate hypotheses about the ways the world may be. Maybe you should eat less gluten and more vegetables? Maybe a high-protein diet plus some nootropics would boost your IQ 5 points? Maybe your diet is fairly optimal for cognitive function already?
  2. Next, visualize the consequences of being wrong, including the consequences of remaining ignorant. Visualize the consequences of performing 10 IQ points below your potential because you were too lazy to investigate, or because you were strongly motivated to justify your preference for a particular theory of nutrition. Visualize the consequences of screwing up your neurology by taking nootropics you feel excited about but that often cause harm to people with cognitive architectures similar to your own.

Exercise 2.2: Make plans for different worlds.

  1. Generate hypotheses about the way the world could be – different worlds you might be living in. Maybe you live in a world where you’d improve your cognitive function by taking nootropics, or maybe you live in a world where the nootropics would harm you.
  2. Make plans for what you’ll do if you happen to live in World #1, what you’ll do if you happen to live in World #2, etc. (For unpleasant possible worlds, this also gives you an opportunity to leave a line of retreat for yourself.)
  3. Notice that these plans are different. This should produce in you some curiosity about which world you actually live in, so that you can make plans appropriate for the world you do live in rather than for one of the worlds you don’t live in.

Exercise 2.3: Recite the Litany of Tarski.

The Litany of Tarski can be adapted to any question. If you’re considering whether the sky is blue, the Litany of Tarski is:

If the sky is blue,
I desire to believe the sky is blue.
If the sky is not blue,
I desire not to believe the sky is blue.

Exercise 2.4: Recite the Litany of Gendlin.

The Litany of Gendlin reminds us:

What is true is already so.
Owning up to it doesn’t make it worse.
Not being open about it doesn’t make it go away.
And because it’s true, it is what is there to be interacted with.
Anything untrue isn’t there to be lived.
People can stand what is true,
for they are already enduring it.

Step 3: Sprint headlong into reality.

If you’ve made yourself uncertain and then curious, you’re now in a position to use argument, empiricism, and scholarship to sprint headlong into reality.

Again, it all comes down to your frame of mind. Embracing the prospect of your own wrongness is hard. You have to train your mind and your emotions the same way a martial arts master trains their body. But as Muehlhauser continues:

[Someone who was really interested in truth] would not flinch away from experiences that might destroy their beliefs. They would train their emotions to fit the facts.

Like the Litany of Tarski says, if there’s something you’re wrong about, you should want to know that. If there’s some flaw in your ideology or some area where your beliefs can be improved, you should want to know where those flaws are and how those improvements can be made, so that you can upgrade your worldview to one that’s more accurate. If you’re on the losing side of an argument, then you should want to lose your arguments – because the alternative is to go on believing something that isn’t true, and to set yourself up for embarrassment or disaster later on. Again, if your beliefs are true, then you can subject them to all the harshest scrutiny imaginable, and they will still emerge even stronger than before. Truth has nothing to fear from honest inquiry. But if it’s actually your opponents’ beliefs that are true, then refusing to allow yourself to discover that fact is nothing but an act of pure self-sabotage. It might feel like acknowledging that you’ve been proven wrong is doing your opponents a favor – but by simply admitting it, allowing yourself to embrace a new truth, and growing stronger in your understanding of the world, you’re actually doing yourself the favor. As Pinker writes:

“Sunlight is the best disinfectant,” according to Justice Louis Brandeis’s famous case for freedom of thought and expression. If an idea really is false, only by examining it openly can we determine that it is false. At that point we will be in a better position to convince others that it is false than if we had let it fester in private, since our very avoidance of the issue serves as a tacit acknowledgment that it may be true. And if an idea is true, we had better accommodate our moral sensibilities to it, since no good can come from sanctifying a delusion. This might even be easier than the ideaphobes fear. The moral order did not collapse when the earth was shown not to be at the center of the solar system, and so it will survive other revisions of our understanding of how the world works.

Two thousand years ago, Marcus Aurelius wrote about the importance of being able “to hear unwelcome truths.” This is what he was talking about. Occasionally you’ll encounter an idea that makes you uncomfortable to think about – you won’t be able to say where it’s wrong, exactly, but if you were to accept it as true, it’s not clear how you’d be able to preserve the conclusions you want to maintain – so your natural inclination will be to just stop focusing on it altogether, and spare yourself the mental discomfort. But this mental discomfort – or cognitive dissonance, as it’s called – is a sign that you should be paying more attention to the idea at hand, because it suggests that your current beliefs might not actually be as airtight as you thought. After all, in a situation where you knew your case was completely irrefutable – like if you were arguing with someone over whether the moon was made of cheese – you wouldn’t feel any mental discomfort or dissonance, because it’d be obvious that their ideas were nothing but a bunch of silly nonsense that you could just laugh off. The fact that you are feeling some dissonance means that you’re not in one of those situations – there might actually be some kernel of truth buried in the midst of those uncomfortable ideas – and if there is, then you need to have the fortitude to dig it out – because ultimately, the difference between what’s true and what’s not true matters, and it’s important to get it right.

This ability to be “good at thinking of uncomfortable thoughts,” as Yudkowsky puts it – to notice when you’re feeling that cognitive dissonance and be willing to take it head-on rather than suppressing it or flinching away from it – is what enables you to learn and become stronger in your beliefs. Refusing to recognize your own internal experiences of confusion and dissonance simply means that you’re denying yourself a critical mental mechanism that can subconsciously tip you off when you acquire a belief that’s false. If you can learn to recognize the various gradations of doubt and uncertainty associated with each of your beliefs – if you can cultivate the ability to notice that “quiet strain in the back of your mind” when your explanations for your beliefs start to “feel a little forced” (to quote Yudkowsky again) – then you can exploit that to your advantage by focusing in on it the way a detective focuses in on a clue. You can switch your mindset from one of false certainty to one of genuine curiosity. You can give yourself permission to explore the topic thoroughly enough to find the correct answers – rather than just pretending to have them all figured out already – and in so doing, you can actually resolve your feelings of confusion and uncertainty rather than merely suppressing them.

That sense of curiosity is the key. As Yudkowsky writes, you should try to approach every question with a mindset of wanting to explore counterarguments, not out of a grudging sense that it’s your intellectual duty to do so, but because you’re genuinely curious to find out where the truth lies:

Consider what happens to you, on a psychological level, if you begin by saying: “It is my duty to criticize my own beliefs.” Roger Zelazny once distinguished between “wanting to be an author” versus “wanting to write.” Mark Twain said: “A classic is something that everyone wants to have read and no one wants to read.” Criticizing yourself from a sense of duty leaves you wanting to have investigated, so that you’ll be able to say afterward that your faith is not blind. This is not the same as wanting to investigate.

This can lead to motivated stopping of your investigation. You consider an objection, then a counterargument to that objection, then you stop there. You repeat this with several objections, until you feel that you have done your duty to investigate, and then you stop there. You have achieved your underlying psychological objective: to get rid of the cognitive dissonance that would result from thinking of yourself as a rationalist and yet knowing that you had not tried to criticize your belief. You might call it purchase of rationalist satisfaction – trying to create a “warm glow” of discharged duty.

Afterward, your stated probability level will be high enough to justify your keeping the plans and beliefs you started with, but not so high as to evoke incredulity from yourself or other rationalists.

When you’re really curious, you’ll gravitate to inquiries that seem most promising of producing shifts in belief, or inquiries that are least like the ones you’ve tried before. Afterward, your probability distribution likely should not look like it did when you started out – shifts should have occurred, whether up or down; and either direction is equally fine to you, if you’re genuinely curious.

Contrast this to the subconscious motive of keeping your inquiry on familiar ground, so that you can get your investigation over with quickly, so that you can have investigated, and restore the familiar balance on which your familiar old plans and beliefs are based.

[…]

In the microprocess of inquiry, your belief should always be evenly poised to shift in either direction. Not every point may suffice to blow the issue wide open – to shift belief from 70% to 30% probability – but if your current belief is 70%, you should be as ready to drop it to 69% as raise it to 71%. You should not think that you know which direction it will go in (on average), because by the laws of probability theory, if you know your destination, you are already there. If you can investigate honestly, so that each new point really does have equal potential to shift belief upward or downward, this may help to keep you interested or even curious about the microprocess of inquiry.

[…]

There just isn’t any good substitute for genuine curiosity. A burning itch to know is higher than a solemn vow to pursue truth.
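One line in that passage deserves unpacking: “if you know your destination, you are already there” is a consequence of what probability theory calls conservation of expected evidence. Before you investigate, the probability-weighted average of your possible post-investigation beliefs must equal your current belief, so an honest inquiry can never be expected, in advance, to move you in one particular direction. Here’s a minimal sketch in Python (the numbers are invented purely for illustration):

```python
# Conservation of expected evidence: averaged over the possible outcomes
# of an honest investigation, your expected posterior equals your prior,
# so you cannot know in advance which way the inquiry will move you.

prior = 0.70                 # current belief P(H)
p_e_given_h = 0.60           # P(E | H): chance of observing evidence E if H is true
p_e_given_not_h = 0.40       # P(E | not-H)

# Total probability of observing E
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Posterior belief if E is observed (Bayes' theorem), and if it isn't
posterior_if_e = p_e_given_h * prior / p_e
posterior_if_not_e = (1 - p_e_given_h) * prior / (1 - p_e)

# The probability-weighted average of the two posteriors recovers the prior
expected_posterior = p_e * posterior_if_e + (1 - p_e) * posterior_if_not_e
print(posterior_if_e, posterior_if_not_e, expected_posterior)
# -> roughly 0.778, 0.609, and 0.70
```

Observing the evidence would move you up to about 78%; failing to observe it would move you down to about 61%; but weighted by how likely each outcome is, the expectation is exactly the 70% you started with. An “investigation” guaranteed in advance to shift you one way isn’t inquiry; it’s apologetics.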

Unfortunately, people don’t generally like to approach questions in this way – because admitting that they could learn something new or improve their beliefs in some way would mean admitting that there might be gaps in their knowledge in the first place. Admitting to ignorance, no matter how partial or minor, often feels like admitting to stupidity. But it doesn’t have to feel this way. After all, as Simler points out, someone who hasn’t invested any of their ego in a particular belief “experiences no anguish in letting go of [that belief if it turns out to be false] and adopting a better one, even its opposite. In fact, it’s a pleasure. If I believe that my daughter’s soccer game starts at 6pm, but my neighbor informs me that it’s 5pm, I won’t begrudge his correction – I’ll be downright grateful.” In that kind of situation, there’s no sense that anyone’s self-image should feel threatened by being wrong – it’s just a matter of being mistaken – and so correcting the false belief is no big deal at all. The only reason why other beliefs feel different is because we’ve invested our ego in them, so we feel like we’re losing face by getting them wrong. But like I said, there’s no reason why this has to be the case – because being wrong really isn’t the same thing as being stupid, as Tavris and Aronson explain:

It’s another form of [Shimon] Peres’s [dictum that “when a friend makes a mistake, the friend remains a friend, and the mistake remains a mistake”]: Articulate the cognitions and keep them separate. “When I, a decent, smart person, make a mistake, I remain a decent, smart person and the mistake remains a mistake. Now, how do I remedy what I did?”

So embedded is the link between mistakes and stupidity in American culture that it can be shocking to learn that not all cultures share the same phobia about them. In the 1970s, psychologists Harold Stevenson and James Stigler became interested in the math gap in performance between Asian and American schoolchildren: By the fifth grade, the lowest-scoring Japanese classroom was outperforming the highest-scoring American classroom. To find out why, Stevenson and Stigler spent the next decade comparing elementary classrooms in the U.S., China, and Japan. Their epiphany occurred as they watched a Japanese boy struggle with the assignment of drawing cubes in three dimensions on the blackboard. The boy kept at it for forty-five minutes, making repeated mistakes, as Stevenson and Stigler became increasingly anxious and embarrassed for him. Yet the boy himself was utterly unselfconscious, and the American observers wondered why they felt worse than he did. “Our culture exacts a great cost psychologically for making a mistake,” Stigler recalled, “whereas in Japan, it doesn’t seem to be that way. In Japan, mistakes, error, confusion [are] all just a natural part of the learning process.” (The boy eventually mastered the problem, to the cheers of his classmates.) The researchers also found that American parents, teachers, and children were far more likely than their Japanese and Chinese counterparts to believe that mathematical ability is innate; if you have it, you don’t have to work hard, and if you don’t have it, there’s no point in trying. In contrast, most Asians regard math success, like achievement in any other domain, as a matter of persistence and plain hard work. Of course you will make mistakes as you go along; that’s how you learn and improve. It doesn’t mean you are stupid.

Making mistakes is central to the education of budding scientists and artists of all kinds, who must have the freedom to experiment, try this idea, flop, try another idea, take a risk, be willing to get the wrong answer. One classic example, once taught to American schoolchildren and still on many inspirational Web sites in various versions, is Thomas Edison’s reply to his assistant (or to a reporter), who was lamenting Edison’s ten thousand experimental failures in his effort to create the first incandescent light bulb. “I have not failed,” he told the assistant (or reporter). “I successfully discovered 10,000 elements that don’t work.” Most American children, however, are denied the freedom to noodle around, experiment, and be wrong in ten ways, let alone ten thousand. The focus on constant testing, which grew out of the reasonable desire to measure and standardize children’s accomplishments, has intensified their fear of failure. It is certainly important for children to learn to succeed; but it is just as important for them to learn not to fear failure. When children or adults fear failure, they fear risk. They can’t afford to be wrong.

There is another powerful reason that American children fear being wrong: They worry that making mistakes reflects on their inherent abilities. In twenty years of research with American schoolchildren, psychologist Carol Dweck has pinpointed one of the major reasons for the cultural differences that Stevenson and Stigler observed. In her experiments, some children are praised for their efforts in mastering a new challenge. Others are praised for their intelligence and ability, the kind of thing many parents say when their children do well: “You’re a natural math whiz, Johnny.” Yet these simple messages to children have profoundly different consequences. Children who, like their Asian counterparts, are praised for their efforts, even when they don’t “get it” at first, eventually perform better and like what they are learning more than children praised for their natural abilities. They are also more likely to regard mistakes and criticism as useful information that will help them improve. In contrast, children praised for their natural ability learn to care more about how competent they look to others than about what they are actually learning. They become defensive about not doing well or about making mistakes, and this sets them up for a self-defeating cycle: If they don’t do well, then to resolve the ensuing dissonance (“I’m smart and yet I screwed up”), they simply lose interest in what they are learning or studying (“I could do it if I wanted to, but I don’t want to”). When these kids grow up, they will be the kind of adults who are afraid of making mistakes or taking responsibility for them, because that would be evidence that they are not naturally smart after all.

Dweck has found that these different approaches toward learning and the meaning of mistakes – are they evidence that you are stupid or evidence that you can improve? – are not ingrained personality traits. They are attitudes, and, as such, they can change. Dweck has been changing her students’ attitudes toward learning and error for years, and her intervention is surprisingly simple: She teaches elementary-school children and college students alike that intelligence is not a fixed, inborn trait, like eye color, but rather a skill, like bike riding, that can be honed by hard work. This lesson is often stunning to American kids who have been hearing for years that intelligence is innate. When they accept Dweck’s message, their motivation increases, they get better grades, they enjoy their studies more, and they don’t beat themselves up when they have setbacks.

The moral of our story is easy to say, and difficult to execute. When you screw up, try saying this: “I made a mistake. I need to understand what went wrong. I don’t want to make the same mistake again.” Dweck’s research is heartening because it suggests that at all ages, people can learn to see mistakes not as terrible personal failings to be denied or justified, but as inevitable aspects of life that help us grow, and grow up.

And Ryan Holiday echoes these insights:

Too often, convinced of our own intelligence or success, we stay in a comfort zone that ensures that we never feel stupid (and are never challenged to learn or reconsider what we know). It obscures from view various weaknesses in our understanding, until eventually it’s too late to change course. This is where the silent toll is taken.

Each of us faces a threat as we pursue our craft. Like sirens on the rocks, ego sings a soothing, validating song – which can lead to a wreck. The second we let the ego tell us we have graduated, learning grinds to a halt. That’s why UFC champion and MMA pioneer Frank Shamrock said, “Always stay a student.” As in, it never ends.

The solution is as straightforward as it is initially uncomfortable: Pick up a book on a topic you know next to nothing about. Put yourself in rooms where you’re the least knowledgeable person. That uncomfortable feeling, that defensiveness that you feel when your most deeply held assumptions are challenged – what about subjecting yourself to it deliberately? Change your mind. Change your surroundings.

An amateur is defensive. The professional finds learning (and even, occasionally, being shown up) to be enjoyable; they like being challenged and humbled, and engage in education as an ongoing and endless process.

Larry Ellison recalls a conversation he once had with Bill Gates which exemplified this mentality perfectly:

It was the most interesting conversation I’ve ever had with Bill, and the most revealing. It was around eleven o’clock in the morning, and we were on the phone discussing some technical issue, I don’t remember what it was. Anyway, I didn’t agree with him on some point, and I explained my reasoning. Bill says, “I’ll have to think about that, I’ll call you back.” Then I get this call at four in the afternoon and it’s Bill continuing the conversation with “Yeah, I think you’re right about that, but what about A and B and C?” I said, “Bill, have you been thinking about this for the last five hours?” He said, yes, he had, it was an important issue and he wanted to get it right. Now Bill wanted to continue the discussion and analyze the implications of it all. I was just stunned. He had taken time and effort to think it all through and had decided I was right and he was wrong. Now, most people hate to admit they’re wrong, but it didn’t bother Bill one bit. All he cared about was what was right, not who was right. That’s what makes Bill very, very dangerous.

The truth is, there’s no need to feel defensive about the gaps in your knowledge – because everyone has gaps in their knowledge. Everyone has things that they think they’re right about but are actually wrong about; and everyone has things that just completely confuse them. Simply recognizing that this is the case – that there’s always room to improve your beliefs – can put you in a mindset that’s much more conducive to doing so. Manson’s advice here is:

Hold weaker opinions. Recognize that unless you are an expert in a field, there is a good chance that your intuitions or assumptions are flat-out wrong. The simple act of telling yourself (and others) before you speak, “I could be wrong about this,” immediately puts your mind in a place of openness and curiosity. It implies an ability to learn and to have a closer connection to reality.

In other words, the better you are at maintaining intellectual humility, the more room you’ll have for intellectual growth. As Chris Voss adds:

We must let what we know […] guide us but not blind us to what we do not know; we must remain flexible and adaptable to any situation; we must always retain a beginner’s mind.

And Sam Harris drives the point home:

Wherever we look, we find otherwise sane men and women making extraordinary efforts to avoid changing their minds.

Of course, many people are reluctant to be seen changing their minds, even though they might be willing to change them in private, seemingly on their own terms – perhaps while reading a book. This fear of losing face is a sign of fundamental confusion. Here it is useful to take the audience’s perspective: Tenaciously clinging to your beliefs past the point where their falsity has been clearly demonstrated does not make you look good. We have all witnessed men and women of great reputation embarrass themselves in this way. I know at least one eminent scholar who wouldn’t admit to any trouble on his side of a debate stage were he to be suddenly engulfed in flames.

If the facts are not on your side, or your argument is flawed, any attempt to save face is to lose it twice over. And yet many of us find this lesson hard to learn. To the extent that we can learn it, we acquire a superpower of sorts. In fact, a person who surrenders immediately when shown to be in error will appear not to have lost the argument at all. Rather, he will merely afford others the pleasure of having educated him on certain points.

The superpower analogy is a good one; research has shown that one of the key features of so-called “superforecasters” – i.e. people who are significantly better than average at recognizing trends, predicting future events, and just generally being right about things – is that they are good at incorporating new information into their worldviews and changing their minds as the facts dictate. Tetlock explains:

They tend to be more actively open-minded. They tend to treat their beliefs not as sacred possessions to be guarded but rather as testable hypotheses to be discarded when the evidence mounts against them. That’s [one] way in which they differ from many people. They try not to have too many ideological sacred cows. They’re willing to move fairly quickly in response to changing circumstances.

And this is a key point – not just that these superforecasters are open to changing their minds, but that they’re eager to do so, even when it means sacrificing one of their most central beliefs. By updating their beliefs more quickly, they spend less time being wrong and more time being right. Yudkowsky shares his thoughts on the matter:

I just finished reading a history of Enron’s downfall, The Smartest Guys in the Room, which hereby wins my award for “Least Appropriate Book Title”.

An unsurprising feature of Enron’s slow rot and abrupt collapse was that the executive players never admitted to having made a large mistake. When catastrophe #247 grew to such an extent that it required an actual policy change, they would say “Too bad that didn’t work out – it was such a good idea – how are we going to hide the problem on our balance sheet?” As opposed to, “It now seems obvious in retrospect that it was a mistake from the beginning.” As opposed to, “I’ve been stupid.” There was never a watershed moment, a moment of humbling realization, of acknowledging a fundamental problem. After the bankruptcy, Jeff Skilling, the former COO and brief CEO of Enron, declined his own lawyers’ advice to take the Fifth Amendment; he testified before Congress that Enron had been a great company.

Not every change is an improvement, but every improvement is necessarily a change. If we only admit small local errors, we will only make small local changes. The motivation for a big change comes from acknowledging a big mistake.

As a child I was raised on equal parts science and science fiction, and from Heinlein to Feynman I learned the tropes of Traditional Rationality: Theories must be bold and expose themselves to falsification; be willing to commit the heroic sacrifice of giving up your own ideas when confronted with contrary evidence; play nice in your arguments; try not to deceive yourself; and other fuzzy verbalisms.

A traditional rationalist upbringing tries to produce arguers who will concede to contrary evidence eventually – there should be some mountain of evidence sufficient to move you. This is not trivial; it distinguishes science from religion. But there is less focus on speed, on giving up the fight as quickly as possible, integrating evidence efficiently so that it only takes a minimum of contrary evidence to destroy your cherished belief.

I was raised in Traditional Rationality, and thought myself quite the rationalist. I switched to Bayescraft (Laplace/Jaynes/Tversky/Kahneman) in the aftermath of… well, it’s a long story. Roughly, I switched because I realized that Traditional Rationality’s fuzzy verbal tropes had been insufficient to prevent me from making a large mistake.

After I had finally and fully admitted my mistake, I looked back upon the path that had led me to my Awful Realization. And I saw that I had made a series of small concessions, minimal concessions, grudgingly conceding each millimeter of ground, realizing as little as possible of my mistake on each occasion, admitting failure only in small tolerable nibbles. I could have moved so much faster, I realized, if I had simply screamed “OOPS!”

And I thought: I must raise the level of my game.

There is a powerful advantage to admitting you have made a large mistake. It’s painful. It can also change your whole life.

It is important to have the watershed moment, the moment of humbling realization. To acknowledge a fundamental problem, not divide it into palatable bite-size mistakes.

Do not indulge in drama and become proud of admitting [how ignorant and flawed you are, and how prone to committing] errors. It is surely superior to get it right the first time. But if you do make an error, better by far to see it all at once. Even hedonically, it is better to take one large loss than many small ones. The alternative is stretching out the battle with yourself over years. The alternative is Enron.

Since then I have watched others making their own series of minimal concessions, grudgingly conceding each millimeter of ground; never confessing a global mistake where a local one will do; always learning as little as possible from each error. What they could fix in one fell swoop voluntarily, they transform into tiny local patches they must be argued into. Never do they say, after confessing one mistake, I’ve been a fool. They do their best to minimize their embarrassment by saying I was right in principle, or It could have worked, or I still want to embrace the true essence of whatever-I’m-attached-to. Defending their pride in this passing moment, they ensure they will again make the same mistake, and again need to defend their pride.

Better to swallow the entire bitter pill in one terrible gulp.

He sums up:

[One of the core virtues of rationality] is lightness. Let the winds of evidence blow you about as though you are a leaf, with no direction of your own. Beware lest you fight a rearguard retreat against the evidence, grudgingly conceding each foot of ground only when forced, feeling cheated. Surrender to the truth as quickly as you can. Do this the instant you realize what you are resisting; the instant you can see from which quarter the winds of evidence are blowing against you. Be faithless to your cause and betray it to a stronger enemy.

Now, obviously this doesn’t mean that you should just be constantly changing your views at the drop of a hat. If you’ve accumulated mountains of evidence in support of an idea over the course of several years, then just encountering one new piece of evidence against it shouldn’t automatically be a deal-breaker for that idea. You should weigh your beliefs in proportion to the overall balance of evidence. What it does mean, though, is that when that balance of evidence shifts in favor of a new idea, you shouldn’t spend a second longer clinging to the old flawed one than you have to – even if it’s a big one (in fact, especially if it’s a big one).
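The arithmetic of Bayesian updating makes this concrete. In odds form, each independent piece of evidence multiplies your odds by its likelihood ratio, so a belief backed by many supporting observations barely budges when a single contrary one arrives. A minimal sketch in Python (the likelihood ratios are invented for illustration):

```python
# Bayesian updating in odds form: posterior odds = prior odds * likelihood ratio.
# Supporting observations compound multiplicatively, so one contrary datum
# dents a well-supported belief without coming close to overturning it.

def update(prob, likelihood_ratio):
    """Update a probability by one piece of evidence with the given likelihood ratio."""
    odds = prob / (1 - prob) * likelihood_ratio
    return odds / (1 + odds)

belief = 0.5                      # start undecided
for _ in range(10):               # ten supporting observations, each twice as likely
    belief = update(belief, 2.0)  # under the belief as under its negation
print(f"after ten supporting observations: {belief:.4f}")  # ~0.9990

belief = update(belief, 0.5)      # one contrary observation (likelihood ratio 1/2)
print(f"after one contrary observation: {belief:.4f}")     # ~0.9981
```

The contrary observation moves you from about 99.90% to about 99.81% – a genuine shift, but nowhere near a deal-breaker. Only a run of contrary evidence comparable in weight to the supporting pile should actually flip the belief, which is what weighing your beliefs in proportion to the overall balance of evidence means in practice.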

Of course, there’s a reason why the biggest beliefs are the hardest ones to let go of. If you’ve got a belief that’s so central to your thinking that it’s practically a cornerstone of your identity, changing it can feel like giving up who you are as a person. Not only that, but these kinds of foundational beliefs are often deeply tied into things like social identity and status – so if you’re a lifelong churchgoer who suddenly stops believing, for instance, you have to worry about the fallout that will inevitably come with no longer being part of that community. Or if you’re part of a social circle whose members are all liberals and you become a conservative, then good luck dealing with all the backlash on Facebook. Beliefs don’t just exist in a vacuum; there are often all kinds of tribal implications that accompany them, and the prospect of facing social marginalization, ridicule, or ostracism for your beliefs can be daunting.

But there are ways of dealing with this, too. As McRaney points out, one of the best ways to ease the process of making major ideological shifts is to have more than one tribe that you belong to. So for instance, if you happen to identify as a libertarian, you might also identify as a Catholic, a feminist, a transhumanist, a Harry Potter fan, a jazz lover, or any number of other affiliations. By spreading your identity across multiple domains in this way, you won’t feel as much pressure to conform precisely to the groupthink of one particular tribe – because even if you face pushback from one group for diverging from its orthodoxy, you’ll know that your alignment with that group only makes up one facet of your identity – it doesn’t define the entirety of who you are – and you’ll still have the safety net of being able to participate fully in the other communities you’re part of. By having a wide variety of ideological (and non-ideological) interests, you’ll be able to operate more freely in your beliefs, because you’ll have other ways of defining your identity aside from your membership in one particular tribe. That’s why McRaney’s advice is to move in as many circles as possible; the less you pigeonhole yourself, the freer you are to expand your horizons.

(Incidentally, this is the same reason why it’s easier to keep young people out of gangs if they’re also members of a sports team or a youth group or whatever – just giving them an alternative context in which to define their social standing and identity can prevent them from wanting to define themselves solely through their membership in a gang. It’s the same reason why someone who’s in a toxic relationship can have an easier time leaving if they’ve recently started spending time with a new group of friends. And it’s also a good reason why, if you encounter someone with a particularly repugnant or unpopular belief and you want to change their mind about it, you’re more likely to succeed by befriending them and welcoming them into your tribe than by trying to shame them and marginalize them even further than they already are. If you want to get someone to abandon their ideology, you have to show them that there’s an alternative ideology that they can feel like they belong to instead. Or as Stephanie Lepp puts it: “If you’re going to ask someone to jump ship, you have to give them a better ship to jump to; otherwise, what’s the incentive?”)

Another approach, of course, is just to reject the whole concept of having a tribal-based identity altogether, and be comfortable having your own unique beliefs as an individual rather than trying to define yourself in terms of which ideological tribes you’re part of. As Jacob Falkovich writes:

Paul Graham […] recommends “keeping your identity small.” If one self-identifies as ‘progressive’ or ‘anti-progressive,’ any dispute over policy and science on which an official ‘progressive position’ develops can become a threat to one’s identity. […] Labels of any sort are a detriment to clear thinking. In the absence of a position forced on someone by their identity, a person is free to choose a position based on logic and the available evidence.

And Graham himself elaborates:

I finally realized today why politics and religion yield such uniquely useless discussions.

As a rule, any mention of religion on an online forum degenerates into a religious argument. Why? Why does this happen with religion and not with Javascript or baking or other topics people talk about on forums?

[…]

I think what religion and politics have in common is that they become part of people’s identity, and people can never have a fruitful argument about something that’s part of their identity. By definition they’re partisan.

Which topics engage people’s identity depends on the people, not the topic. For example, a discussion about a battle that included citizens of one or more of the countries involved would probably degenerate into a political argument. But a discussion today about a battle that took place in the Bronze Age probably wouldn’t. No one would know what side to be on. So it’s not politics that’s the source of the trouble, but identity. When people say a discussion has degenerated into a religious war, what they really mean is that it has started to be driven mostly by people’s identities.

Because the point at which this happens depends on the people rather than the topic, it’s a mistake to conclude that because a question tends to provoke religious wars, it must have no answer. For example, the question of the relative merits of programming languages often degenerates into a religious war, because so many programmers identify as X programmers or Y programmers. This sometimes leads people to conclude the question must be unanswerable – that all languages are equally good. Obviously that’s false: anything else people make can be well or badly designed; why should this be uniquely impossible for programming languages? And indeed, you can have a fruitful discussion about the relative merits of programming languages, so long as you exclude people who respond from identity.

More generally, you can have a fruitful discussion about a topic only if it doesn’t engage the identities of any of the participants. What makes politics and religion such minefields is that they engage so many people’s identities. But you could in principle have a useful conversation about them with some people. And there are other topics that might seem harmless, like the relative merits of Ford and Chevy pickup trucks, that you couldn’t safely talk about with others.

The most intriguing thing about this theory, if it’s right, is that it explains not merely which kinds of discussions to avoid, but how to have better ideas. If people can’t think clearly about anything that has become part of their identity, then all other things being equal, the best plan is to let as few things into your identity as possible.

Most people reading this will already be fairly tolerant. But there is a step beyond thinking of yourself as x but tolerating y: not even to consider yourself an x. The more labels you have for yourself, the dumber they make you.

The idea here is that you should strive to be “post-partisan;” that is, you shouldn’t care which side an idea comes from, or whose narrative it supports – all you should care about is whether it’s true. You shouldn’t decide in advance that there are certain conclusions you have to reach before you’ve even examined all the facts; you should figure out what all the facts are first, and then your conclusions should follow from them. Ideally, specific beliefs shouldn’t be something that you even really consider to be “yours” at all (at least not in any kind of permanent, identity-defining sense); you should simply regard them as things that you happen to be holding because they’re the best ones available at the moment, but which you could swap out for better ones at any time. And in fact, at the most fundamental level, you shouldn’t even consider your beliefs to be something that you can choose for yourself in the first place. If you’re doing it right, then your worldview should simply be a condition imposed upon you by the facts of the world; and when you encounter new facts, your worldview should helplessly change to accommodate them, regardless of whether they contradict what you might prefer to be true. In Thomas Jefferson’s words: “We [must] not [be] afraid to follow the truth wherever it may lead.”

This mentality, of finding out what the facts are and then accepting whatever truths they point to, will often lead to combinations of beliefs that don’t fit neatly under one particular ideological label. But truth is non-denominational – it doesn’t constrain itself to one particular side 100% of the time – so why should you? There’s no law of nature that says you absolutely have to adhere fully to one of the pre-constructed ideologies that have already been defined by others. You can have your own set of beliefs that combines good ideas from a variety of sources and integrates them into a unique worldview. This doesn’t mean that you can’t still identify as a liberal or a conservative or a Christian or a Muslim or whatever if your beliefs still happen to coincide with most of those ideologies’ central doctrines; but it’s not an all-or-nothing thing. You don’t have to just pick one pre-assembled worldview from the menu. You can choose the buffet, so to speak, and assemble your own. The most important thing is just that whatever labels you might adopt, your core ideological identity above all others shouldn’t be “liberal” or “conservative” or “Christian” or “Muslim” or anything like that, but simply “seeker of truth.”

Being able to break out of the tribalist mentality and evaluate ideas solely for their truth value – and in a broader sense, being able to avoid all the pitfalls of motivated reasoning in general – means that you have to be able at times to mentally put a little distance between yourself and the topic, to evaluate things from a more detached “outside view” and decouple your emotional investments from your intellectual judgments. As Gregory Hays writes:

The discipline of perception requires that we maintain absolute objectivity of thought: that we see things dispassionately for what they are.

And Simler and Hanson elaborate:

An ideal political Do-Right will be the opposite of an ideologue. Because Do-Rights are concerned only with achieving the best outcomes for society, they won’t shy away from contrary arguments and evidence. In fact, they’ll welcome fresh perspectives (with an appropriately critical attitude, of course). When a smart person disagrees with them, they’ll listen with an open mind. And when, on occasion, they actually change one of their political beliefs, they’re apt to be grateful rather than resentful. Their pride might take a small hit, but they’ll swallow it for the sake of the greater good. Think of an effective business leader, actively seeking out different perspectives in order to make the best decisions – that’s how a Do-Right would consume political information.

As we’ve been discussing, though, this is easier said than done. They continue:

But of course, that’s not at all how real voters behave. Most of us live quite happily in our political echo chambers, returning again and again to news sources that support what we already believe. When contrary opinions occasionally manage to filter through, we’re extremely critical of them, although we’re often willing to swallow even the most specious evidence that confirms our views. And we’re more likely to engage in political shouting matches, full of self-righteous confidence, than to listen with the humility that we may (gasp!) be wrong.

The fact that we attach strong emotions to our political beliefs is another clue that we’re being less than fully honest intellectually. When we take a pragmatic, outcome-oriented stance to a given domain, we tend to react more dispassionately to new information. We do this every day in most areas of our lives, like when we buy groceries, pack for a vacation, or plan a birthday party. In these practical domains, we feel much less pride in what we believe, anger when our beliefs are challenged, or shame in changing our minds in response to new information. However, when our beliefs serve non-pragmatic functions, emotions tend to be useful to protect them from criticism.

Yes, the stakes may be high in politics, but even that doesn’t excuse our social emotions. High-stakes situations might reasonably bring out stress and fear, but not pride, shame, and anger. During a national emergency, for example, we hope that our leaders won’t be embarrassed to change their minds when new information comes to light. People are similarly cool and dispassionate when discussing existential risks like global pandemics and asteroid impacts – at least insofar as those risks are politically neutral. When talk turns to politicized risks like global climate change, however, our passions quickly return.

All of this strongly suggests that we hold political beliefs for reasons other than accurately informing our decisions.

Unfortunately, as Mark Hill points out, the incentive structures of most ideological debates nowadays are largely designed to inflame their participants’ emotions as much as possible rather than inhibit them:

There appears to be a horrible process that works like this:

A. In order to want to learn more about political issues, you must be enthusiastic about politics;

B. Enthusiasm about politics means you are more likely to be emotionally invested in the issues;

C. Emotional investment in the issues means a more negative attitude toward anyone who disagrees;

D. A negative attitude toward someone means being more dismissive of his point of view and being less open to changing your mind based on anything he says.

In the world of psychology, they call this attitude polarization; the more time the average person spends thinking about a subject, the more extreme his position becomes — even if he doesn’t run across any new information. Simply repeating your beliefs to yourself makes those beliefs stronger.

And it gets even worse when we wind up in a group — say, on an Internet message board full of people who agree with us, where we can all congratulate each other on being right. Researchers call that group polarization (in public — in private, they call it a “circle jerk”).

Of course, once you get to the point where you’re rooting so hard for one side of an issue that you’re just short of painting your chest in team colors, then all that time spent reading up on the issues stops being about becoming an informed citizen and becomes more about accumulating ammunition for the next argument.

It’s understandable that this happens so often. After all, when it comes to areas like politics and religion, the issues at hand are often ones that affect millions of people, and can even be matters of life and death. When the stakes are that high, how can you not get emotionally invested?

The key point here, though, is that there’s a difference between getting emotionally invested in an important issue, and getting emotionally invested in the specific arguments and beliefs you hold concerning that issue. A lot of people conflate the two, and think that if they’re passionate about their terminal values – life, liberty, justice, security, equality, etc. – then they should be equally staunch in their beliefs about how best to achieve those goals. But in fact, it’s the other way around; if you’re really committed to these ultimate goals, then you shouldn’t particularly care which policy provides the best means of achieving them, just that they get achieved in the best way possible. Your ideology should just be an instrument for accomplishing what you really care about – a means to an end – not something to defend for its own sake. Do school vouchers actually produce better educational results? Maybe so, maybe not; but either way you should be willing to embrace the answer, because the quality of the education should be what you care about, not the means. Does socialized medicine provide better health outcomes at a more efficient cost? Maybe so, maybe not; but you should gladly embrace whichever answer is true, because you shouldn’t be emotionally invested in the privatization or socialization of medicine as an end in itself; you should be emotionally invested in healthcare that’s effective and affordable. What about gun control? Drug legalization? It’s the same in each of these cases. If you’ve been advocating for policy X because you believe it’s the best way to achieve a particular goal, but then you discover that policy Y would actually be a more effective way of achieving it, you should be happy to drop policy X in a heartbeat, because all that should really matter to you is figuring out the best way to achieve the goal. Being passionate about worthy causes is a good thing – the last thing we need in the world is more apathetic cynics – but you should make sure that the things you’re getting emotionally invested in are your terminal values, not your individual object-level arguments for fulfilling them. For those arguments, dispassion is the key.

There are sometimes relatively easy ways to detect that your emotions might be affecting your judgment on a particular topic. Like if you notice yourself bristling at the very thought of the other side – if even the mere mention of the word “Obamacare” or “pro-life” or “atheist” or “Trump” is enough to make your blood start to boil – then you should be aware that your judgment on the matter is probably a bit biased. (This doesn’t necessarily mean that your bias is unwarranted or wrong, mind you – but it does mean that you’ll have to adjust for it if you don’t want to miss whatever insights the other side might actually have.) Similarly, if there’s a controversy in the news and you find yourself wanting to leap to one side’s defense before you even know all the details, it’s a good sign that your judgment has been at least partly compromised by emotional considerations.

It’s not always that easy to detect your own biases, though. All too often, you can think you’re being completely objective and dispassionate in your judgments, when really you’re unknowingly engaged in motivated reasoning. Even so, it’s possible to outsmart your own biases and turn the fear of being wrong to your advantage, by intentionally manipulating your own incentives to compel pure truth-seeking. If you can raise the stakes for being wrong to such a degree that they outweigh all your other considerations (like social signaling, emotional catharsis, etc.), you can minimize your self-deception and make it so that you have no choice but to be brutally honest with yourself about what you really believe (as opposed to what you’ve merely convinced yourself you believe, or what you merely wish were true). One way to do this is to put yourself into situations where it’s actually made explicit that factual accuracy is the only measure by which you’re being judged – not winning the argument or scoring points for your side, but just being able to assess reality as objectively as possible and make the most accurate projections based on that assessment. Tetlock proposes this in the form of organized competitions, implemented on a national scale:

I want to dedicate the last part of my career to improving the quality of public debate. And I see forecasting tournaments as a tool that can be used for that purpose. I believe that if partisans in debates felt that they were participating in forecasting tournaments in which their accuracy could be compared against that of their competitors, we would quite quickly observe the depolarization of many polarized political debates. People would become more circumspect, more thoughtful and I think that would on balance be a better thing for our society and for the world. So I think there are some tangible things in which the forecasting technology can be used to improve the technology of public debate, if only we were open to the possibility.

But there’s no reason why these kinds of forecasting competitions have to just be limited to formal, organized events. Simply having informal contests with your peers, and establishing norms of discourse in which success is defined solely by factual accuracy, can shift your collective mindset into a more dispassionate and constructive one. If everyone in the group stops thinking they can win points merely by preaching to the choir or antagonizing the other side – if the only way of winning prestige is to have the most objective and accurate view of the world – then they’ll be less inclined to waste their efforts on self-indulgent gestures, and more inclined to do their homework and figure out where the truth really lies.
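For success defined solely by factual accuracy to be more than a slogan, the group needs an actual scoring rule. The standard one in Tetlock-style forecasting tournaments is the Brier score: the mean squared error between your stated probabilities and what actually happened. A minimal sketch in Python (the pundits and their forecasts are hypothetical):

```python
# Brier score: mean squared error between probabilistic forecasts and
# outcomes (1 = it happened, 0 = it didn't). Lower is better: 0 is a
# perfect score, and always answering 50% scores 0.25.

def brier_score(forecasts, outcomes):
    """Average squared gap between stated probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Two hypothetical pundits forecast three events; the first two happen, the third doesn't.
outcomes = [1, 1, 0]
loud_pundit = [0.95, 0.95, 0.95]     # maximally confident about everything
careful_pundit = [0.80, 0.70, 0.20]  # hedged, but discriminating

print(brier_score(loud_pundit, outcomes))     # ~0.3025: the one confident miss is costly
print(brier_score(careful_pundit, outcomes))  # ~0.0567: better, despite all the hedging
```

Scored this way, bluster stops paying: the loud pundit’s single confident miss costs more than all of the careful pundit’s hedging combined – exactly the incentive shift these informal contests are meant to produce.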

If you’re feeling particularly competitive, another way of rigging your own incentives to minimize self-deception (which might not be practical for everyone but which is worth mentioning here anyway just for illustrative purposes) is to put your money where your mouth is – i.e. to place literal bets on the propositions you’re arguing over. Just risking your intellectual reputation on an argument is one thing; even if you’re wrong, you can often still rationalize and make excuses for yourself. But putting yourself at risk of an actual, tangible financial loss – even if it’s just a small one – can have a certain way of sharpening your thinking and forcing you to be more honest with yourself about whether you really fully believe what you’re saying, or whether you’re just exaggerating your arguments for effect. “I’m 100% sure” can quickly turn into “Eh, maybe there’s some margin for error” if someone suddenly challenges you to put your hard-earned cash on the line. (As Alex Tabarrok puts it: “A bet is a tax on bullshit.”) And a lot of times, you’ll find that you don’t even have to make the bet at all; just considering what you would do if someone did challenge you to put money on it can make you realize that you aren’t quite as confident in your argument as you thought you were. So if you believed very strongly that, say, the president’s new economic plan would be such a disaster that it would lead to a financial crash within the next five years, or that legalizing gay marriage would lead to the end of straight marriage, or whatever other contentious view you wanted to argue, you could simply ask yourself what kind of odds you would hypothetically be willing to lay on those predictions if you were actually forced to do so. Would you be willing to offer 3-to-1 odds (i.e. you’d lose three times more if you were wrong than you’d win if you were right)? What about 100-to-1 odds? 1000-to-1? (Alternatively, you could try this other creative method called de Finetti’s Game.) You wouldn’t ever have to actually make these bets, of course; but just thinking about the issues in terms of confidence ratios – weighing your certainty against your uncertainty and putting a percentage on your confidence level rather than just a straight “yes” or “no” – can give you a much more nuanced understanding of things.
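The arithmetic behind those hypothetical odds is worth spelling out: laying a-to-b odds on a claim means risking a to win b, which only breaks even if the claim is true with probability of at least a/(a+b). So the longest odds you’d honestly be willing to lay translate directly into a confidence level. A quick sketch in Python (the function name is mine):

```python
# Translating "what odds would I be willing to lay?" into an implied
# confidence level: laying stake-to-payout odds breaks even only if the
# claim is true with probability of at least stake / (stake + payout).

def implied_confidence(stake, payout):
    """Minimum probability at which laying stake-to-payout odds breaks even."""
    return stake / (stake + payout)

for stake, payout in [(3, 1), (100, 1), (1000, 1)]:
    print(f"{stake}-to-{payout} odds -> {implied_confidence(stake, payout):.1%}")
# 3-to-1 odds -> 75.0%
# 100-to-1 odds -> 99.0%
# 1000-to-1 odds -> 99.9%
```

Notice how quickly the requirements escalate: a casual “I’m 100% sure” implies indifference to laying arbitrarily long odds, and very few beliefs survive that test honestly.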

And this is the real bottom line here that all these techniques and thought exercises are designed to serve. If you allow yourself to get swept up in the kind of absolutist thinking that dominates so much of modern discourse – reducing every issue to an oversimplified black-and-white binary – you’ll end up missing all the nuances that make the issue a debate in the first place. It may feel more satisfying on a gut level to claim 100% certainty in your beliefs – and it may score you more cheap points on social media to frame every issue in absolutist terms. But being able to think probabilistically – to never presume 100% certainty on any issue, but instead to assign different degrees of probability to each of your beliefs – is far more likely to give you an accurate worldview. It’s true that black and white areas do exist. There are some things that really are absolute. But the point is that they aren’t the only areas that exist – there are all kinds of grey areas in between – and unless you can train yourself to think in greyscale, rather than thinking exclusively in black and white, you’ll never be able to understand the whole picture.

(See Nate Soares’s insightful post on the subject here.)

Speaking from personal experience, I can say that thinking probabilistically (when I actually manage to do it) has enabled me to more readily accept when my opponents make good points – because instead of feeling forced to say something like “That’s a good point; I thought X was a great idea but now I realize that it’s a terrible idea” (and making such a dramatic 180-degree reversal of position instantaneously is pretty difficult), I can say something more like “That’s a good point; I still mostly think favorably of X but now I’ve adjusted my position a few percentage points in the other direction, and if I continue to encounter more good evidence against it I’ll adjust my percentages even more.” That way, I can acknowledge the strength of their point while still accounting for the fact that all the prior evidence I’ve accumulated over the years still weighs mostly in X’s favor on balance. And this has been a huge help for me when it comes to navigating these kinds of conversations.

But as it turns out, this isn’t just a one-way benefit. Adopting this kind of probabilistic approach doesn’t just make me more receptive to other people’s good ideas; it also has the fortunate side effect of increasing receptiveness in the other direction as well, leading to a more productive conversation all around. As Kathryn Schulz writes:

If being contradicted or facing other people’s categorical pronouncements tends to make listeners stubborn, defensive, and inclined to disagree, open expressions of uncertainty can be remarkably disarming. To take a trivial but common example, I once sat in on a graduate seminar in which a student prefaced a remark by saying, “I might be going out on a limb here.” Before that moment, the class had been contentious; the prevailing ethos seemed to be a kind of academic one-upmanship, in which the point was to undermine all previous observations. After this student’s comment, though, the room seemed to relax. Because she took it upon herself to acknowledge the provisionality of her idea, her classmates were able to contemplate its potential merit instead of rushing to invalidate it.

These kinds of disarming, self-deprecating comments (“this could be wrong, but…” “maybe I’m off the mark here…”) are generally considered more typical of the speech patterns of women than men. Not coincidentally, they are often criticized as overly timid and self-sabotaging. But I’m not sure that’s the whole story. Awareness of one’s own qualms, attention to contradiction, acceptance of the possibility of error: these strike me as signs of sophisticated thinking, far preferable in many contexts to the confident bulldozer of unmodified assertions. Philip Tetlock, too, defends these and similar speech patterns (or rather, the mental habits they reflect), describing them, admiringly, as “self-subversive thinking.” That is, they let us function as our own intellectual sparring partner, thereby honing – or puncturing – our beliefs. They also help us do greater justice to complex topics and make it possible to have riskier thoughts. At the same time, by moving away from decree and toward inquiry, they set the stage for more open and interesting conversations. Perhaps the most striking and paradoxical effect of the graduate student’s out-on-a-limb caveat was that, even as it presented her idea as potentially erroneous, it caused her classmates to take that idea more seriously: it inspired her listeners to actually listen.

Ultimately, it’s not hard to understand why the student’s classmates reacted as they did. Despite the common assumption that speaking with more certainty is always the best way to appear more credible, there are real limits to this approach. Sure, it might work for an audience that thinks the topic being debated is a simple one that should have a clear and unambiguous answer. But if the topic is a complex one with a lot of nuances, then the person with the most credibility will actually tend to be the one who recognizes and acknowledges those nuances, not the person who acts as though they don’t exist and the answer is obvious. Most people may not be aware of this dynamic at a conscious level when making their own arguments (hence the ubiquity of people exaggerating their level of certainty on seemingly every issue), but I think they often do realize it, if only subconsciously, when they hear the arguments of others. And I think they’re right to do so – because when it comes to the most complex issues, those with more nuanced opinions really do tend to have a better track record of actually understanding them accurately.

André Gide famously wrote:

Trust those who seek the truth, but doubt those who say they have found it.

If someone claims 100% certainty in their worldview – if they say that they know all they need to know – then it’s a good bet that their worldview is a grossly oversimplified one that doesn’t accurately reflect reality. It’s not that they’ve actually learned all they need to know; it’s just that they’ve chosen to stop learning. The real truth is that nobody knows everything (or even most things) – it’s not even physically possible – and one of the defining features of intellectual maturity is the ability to recognize this. If you’ve got certain beliefs that you’ve researched and found to be strongly supported by the best available evidence, then sure, you can assign a high level of confidence to those beliefs. (Yes, the moon really is made of rock and not cheese.) But if there are things that you haven’t quite learned enough about yet to justify having a confident opinion, then you should be honest with yourself about that; you shouldn’t just assert a confident opinion anyway for the sake of appearing more authoritative. A lot of people seem to think that they have to have a decisive opinion on every issue, and that if they don’t, they’ll look ignorant. But not every issue has an immediately knowable answer. If I ask you how many fingers I’m holding up behind my back, you won’t look more knowledgeable if you assert in complete seriousness that you know for a fact that I’m holding up four fingers – you’ll just look like a buffoon. The correct answer to questions like that is “I don’t know.” So if you find yourself in a situation where all the facts aren’t in yet and the answers aren’t clear, you shouldn’t just pick one prematurely and decide that that’s the answer you’re going to go with; you should be willing to suspend judgment until you know all that you need to know, and only then draw your conclusion about where the best evidence is pointing. You don’t just have to answer “definitely yes” or “definitely no” to every question. You can answer things like “I’d say about 70% yes” or “The evidence doesn’t seem conclusive one way or the other to me just yet” – and those can be more useful answers than if you’d just uncritically planted your flag on one side or the other.

Now, this doesn’t mean that you should use this as an excuse to avoid a particular topic because you might not like the conclusion. You’ll sometimes notice, for instance, that people do this when it comes to scientific questions where the answer might contradict their worldview. Instead of rolling up their sleeves and digging into the research, they’ll just say “Look, I’m not a scientist; I don’t have enough expertise to know all the technical details I’d need to answer this question for sure.” It’s good that they’re admitting their intellectual blind spots, of course, but what’s not so good is when they intentionally leave those questions unanswered, neglecting to address those blind spots so that they can avoid reaching a conclusion that they don’t want to reach. They’ll rationalize that as long as the question is still open, there’s still room for their preferred conclusion to be true – much like the person who refuses to go to the doctor because not knowing the state of their health allows them to continue believing that they’re perfectly healthy – and so they’ll just go right on avoiding the truth. They use the phrase “I don’t know” as an avoidance mechanism.

But “I don’t know” shouldn’t be an indication of which questions you want to avoid; it should be an indication of which questions you need to delve deeper into. If you can’t quite figure out what the right answer is, you should want to get down to the bottom of the mystery, not avoid it. As Jostein Gaarder says:

A philosopher knows that in reality he knows very little. That is why he constantly strives to achieve true insight. Socrates was one of these rare people. He knew that he knew nothing about life and the world. And now comes the important part: it troubled him that he knew so little.

A philosopher is therefore someone who recognizes that there is a lot he does not understand, and is troubled by it.

And as Alexander describes it:

I don’t know how the internal experience of curiosity works for other people, but to me it’s a sort of itch I get when the pieces don’t fit together and I need to pick at them until they do. I’ve talked to some actual scientists who have this way stronger than I do. An intellectually curious person is a heat-seeking missile programmed to seek out failures in existing epistemic paradigms.

You should always want to expand the scope of your knowledge; and whenever you say “I don’t know,” you should always want to follow it up with “…yet.” That’s what becoming more knowledgeable means – it’s a never-ending process of, as Tetlock and Gardner put it, “gradually getting closer to the truth by constantly updating [your beliefs] in proportion to the weight of the evidence.” It’s not always easy – sometimes you have to make dramatic shifts in your thinking, and sometimes you even have to give up core beliefs that you’ve spent years becoming deeply invested in – but you can never improve your worldview unless you’re willing to change it; and you can never make big improvements to your worldview unless you’re willing to make big changes. Again, it all comes back to taking your ego out of the equation. If you can do that – if you can simply learn to respond to good counterarguments with statements like “Oh yeah, my bad, I actually think you’re right about that point” – then you can be a lot more nonchalant about effortlessly changing your views when appropriate, and it won’t feel like a big deal, either to yourself or to your discussion partners. After all, it’s not that you’re having to painfully admit that you were wrong and therefore stupid; it’s simply that you’re taking a belief that was perfectly reasonable given the information available to you at the time, and updating it to an even more accurate view now that you have access to new information. (For some reason, calling it “updating” seems to make it easier than calling it “changing your mind.”) You’ve been following the optimal epistemic process the whole time, so why should you be ashamed like you’ve done something wrong? If anything, you should be proud of your open-mindedness. (As Cowen suggests, you might imagine yourself as “the best person in the world at listening to advice.”) If you can learn to pride yourself on your ability to update your knowledge in this way – if you can train yourself, as Sam Bowman writes, “to internalise the virtue of open-mindedness so that changing your mind makes you feel just as good as being ideologically consistent once did” – then that’s a good first step to becoming wiser on a whole other level than you were before.
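To make Tetlock and Gardner’s “updating in proportion to the weight of the evidence” a bit more concrete, here’s a minimal sketch of Bayesian updating – the standard formalization of that process – in Python. The specific numbers (a 70% prior, one piece of evidence three times likelier if the claim is true, one piece half as likely) are invented purely for illustration:

```python
# A toy model of belief updating. All numbers are made up for illustration.

def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Return the posterior probability after one piece of evidence.

    likelihood_ratio = P(evidence | claim true) / P(evidence | claim false)
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

belief = 0.70                       # "I'd say about 70% yes"
belief = bayes_update(belief, 3.0)  # supporting evidence -> 0.875
belief = bayes_update(belief, 0.5)  # contrary evidence   -> ~0.778
print(round(belief, 3))
```

Each observation nudges the belief by exactly as much as the evidence warrants – no more, no less – which is all that “updating” really amounts to.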

Peter Watts provides a parable:

We climbed this hill. Each step up we could see farther, so of course we kept going. Now we’re at the top. […] And we look out across the plain and we see this other tribe dancing around above the clouds, even higher than we are. Maybe it’s a mirage, maybe it’s a trick. Or maybe they just climbed a higher peak we can’t see because the clouds are blocking the view. So we head off to find out – but every step takes us downhill. No matter what direction we head, we can’t move off our peak without losing our vantage point. So we climb back up again. We’re trapped on a local maximum.

But what if there is a higher peak out there, way across the plain? The only way to get there is to bite the bullet, come down off our foothill and trudge along the riverbed until we finally start going uphill again. And it’s only then you realize: Hey, this mountain reaches way higher than that foothill we were on before, and we can see so much better from up here.

But you can’t get there unless you leave behind all the tools that made you so successful in the first place. You have to take that first step downhill.
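Watts’s parable is, almost word for word, what optimization theory calls getting stuck on a local maximum. Here’s a minimal sketch in Python – the landscape function and starting points are invented for illustration – showing that a greedy climber who refuses to ever step downhill ends up on whichever peak happens to be nearest, not the highest one:

```python
import math

def landscape(x: float) -> float:
    # Two peaks: a foothill of height ~2 near x = -2,
    # and a taller mountain of height ~5 near x = 3.
    return 2 * math.exp(-(x + 2) ** 2) + 5 * math.exp(-((x - 3) ** 2) / 2)

def hill_climb(x: float, step: float = 0.01) -> float:
    """Greedy ascent: only ever take a step that goes uphill."""
    while True:
        best_neighbor = max(x - step, x + step, key=landscape)
        if landscape(best_neighbor) <= landscape(x):
            return x  # every direction leads downhill; we're on a peak
        x = best_neighbor

print(round(landscape(hill_climb(-2.5)), 2))  # starts by the foothill -> 2.0
print(round(landscape(hill_climb(1.0)), 2))   # starts on the plain    -> 5.0
```

The standard escapes – random restarts, or temporarily accepting downhill moves, as in simulated annealing – are exactly the parable’s “trudge along the riverbed” step: you have to be willing to lose altitude before you can find the higher peak.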

XVI.

There’s one other thing to consider when it comes to maintaining your intellectual humility and not slipping into toxic argumentativeness, which is the question of whether it’s better to explore your ideas and opinions publicly or to do so privately. (This relates back to our earlier discussion of signaling behaviors – and we also touched on it briefly with Alexander’s guidelines for logical debates – but it’s worth focusing in on more explicitly here.)

It seems like all our worst tribal tendencies and absolutist patterns of behavior tend to become more pronounced in settings where the debate is taking place in front of an audience of some kind (e.g. social media, TV talk shows, opinion pages, etc.). If you’re just having a private one-on-one conversation with somebody, the tone is generally more civil and constructive; there’s more give-and-take between the two sides, there’s more of a genuine attempt to understand each other, and both sides typically feel freer to make concessions and change their views as needed. As soon as the debate enters the public eye, though – as soon as you bring a bunch of judgmental onlookers into the equation – the tone shifts. All of a sudden, there’s a kind of performative factor that’s introduced, and both sides instinctively start digging in and behaving in a way that’s more geared toward winning the argument and looking smarter than the other person. The signaling incentives start to outweigh the truth-seeking incentives.

Again, I think this is one of the main reasons why the popular discourse seems to have taken such a dramatic turn for the worse in recent years. With the rise of social media, YouTube, and the blogosphere, the majority of ideological debates are now (maybe for the first time ever) taking place in front of an audience – and accordingly, they’re more likely to produce overwrought grandstanding and finger-pointing than nuanced exchanges of ideas. Adam Kotsko shares his thoughts on the subject:

The internet [has become] a tool for judgment rather than dialogue.

[…]

Every status update and comment [on a social media platform like Facebook is] staged as a mini “hot or not” contest: Either you press “like” or pass over it in silence. [And] it turns out that making Facebook do anything but “hot or not” is hard. For instance, some people try to make it into a platform for sharing interesting links. But in practice, that effort devolves into the activity of passing judgment on those links – and more important, inviting others to pass that same judgment and thereby express approval of us. And in fact, this system of judgment is impossible to opt out of, because Facebook does not allow you to turn off comments or the numerical displays of “likes” and other emotional reactions. Facebook makes us all into the fictionalized [Mark] Zuckerberg [as depicted in the film The Social Network], awaiting a positive judgment from the chain of judgments we set off – or, more precisely, a positive assessment in the form of that chain. What is most “hot” turns out to be what is most viral. We measure our status by how far our internet-transmitted conditions have propagated themselves.

This leads to the much-discussed phenomenon of passing along links without reading them or checking their source. For Facebook users, the important thing isn’t typically to spread accurate information – after all, people in a given social circle often share the same links every day – but to elicit a reflection of our own correctness, whether it is expressed with smiley faces or frowny faces. We can tell we are correct by the fact that everyone approves of us, which is to say that everyone has caught the same virus. Self-expression and conformity strangely overlap as we compete to see who can be the most alike, meaning who can be the first to say “what we’re all thinking.”

Attempting to use Facebook as a discussion forum may seem to have better chances. Alongside the infrastructure of “liking,” we initially appear to have nothing but an empty box – a free-form discursive medium – that, when filled in, gives birth to a comment thread not unlike those found in nearly every internet discussion tool since Usenet. Here, too, however, the inertial pull of passing judgment is strong. Productive discussion requires at least a modicum of critical distance, a willingness to entertain unfamiliar or even opposed views for the sake of argument, but the Facebook interface throws up obstacles to this. Inscribed in the comment box itself is a little picture of you, making every comment personal – as much about you as about what is being said. Indeed, as you scroll down the page, you see your own picture over and over again, inviting you to propagate your image further and further. And when others appear in the comments to your status, the same dynamics apply as with shared links – they are an opportunity to pile on, either with praise or with abuse, reinforcing the mutual regard of the “hot.”

[…]

Whatever else the internet is, then, the hegemony of these forms of social media have turned it into an increasingly efficient machine for judgment – for passing judgment, for eliciting judgment, for soliciting judgment. And the more attention you attract, the more likely it becomes that you will face an overwhelming backlash of negative judgment. Andy Warhol famously asserted that in the future, everyone would be famous for 15 minutes, but he didn’t specify what form that fame might take. Were he still alive, he might now say that in the future, everyone will be hated by thousands of strangers for 15 minutes.

The pressure to perform in a way that will win the approval of others – and to avoid the loss of face that comes with failure to do so – doesn’t just influence people’s behavior on social media, either. The effect is so powerful that it can even drive them to physical violence in the real world. As Pinker notes:

Studies of American street violence […] have found that the presence of an audience doubles the likelihood that an argument between two men will escalate to violence.

And as Tim O’Brien reveals in his description of combat soldiers in Vietnam, this desire to avoid embarrassment and maintain a respectable self-image can override even the most fundamental human instincts, like self-preservation:

They were afraid of dying but even more afraid to show it. […] They carried the soldier’s greatest fear, which was the fear of blushing. Men killed, and died, because they were embarrassed not to.

These are extreme examples, obviously. But the point is clear: The pressure of being judged by an audience can have an overwhelmingly distortionary effect on a person’s behavior. If your goal is to have dispassionate conversations, then, and to avoid situations that might cause either side to become emotionally invested in their arguments and act irrationally, it’s generally more productive to have those conversations in private – or if you can’t have them in private, to at least try to minimize the performative pressure as much as possible. You might not get as much social prestige from gently pulling someone aside and speaking with them privately as you would from publicly calling them out in a highly visible way, but you are more likely to actually change their views and/or learn something valuable yourself.

There’s another reason why I bring this up, though – a meta-reason, actually, concerning the writing on this site itself. After all, it’s kind of hard to apply the principles of private, pressure-free discussion to a platform like this one, which by definition is public and therefore subject to popular judgment. Still, I think there are ways to at least try to minimize the detrimental effects, so that’s what I want to try to do here. A lot of popular writing nowadays is geared toward embracing the public debate mentality and leaning into the pressures it creates; everybody is writing with the aim of pushing their views on their audiences and trying to make them agree with those views. I want to try a different approach, though; instead of trying to proselytize and convert readers to my worldview, I just want to lay out a basic description of what my worldview is and why I believe it, as if I were just dispassionately making a catalog of all my beliefs for reference purposes. Aside from this post you’re reading now, of course, which really is a prescriptive one where I’m trying to convince readers to agree with the ideas I’ve been describing (how to have better discussions, how to think more independently, etc.), I want to try and write about my more object-level beliefs (on religion, politics, etc.) in a way that’s more of a straightforward summary – what I believe confidently, what I’m not so sure about, and so forth. (Of course, at some level I do want to convert people to my worldview, simply because I believe it’s true – it would be disingenuous to act like I didn’t want people to believe what was true – but that won’t be my sole reason for writing, at least.)

I know that if I obsess too much over how my ideas will be received by some hypothetical audience, I’ll be tempted to pander or embellish too much in some areas while hedging or holding back too much in others – so instead I just want to write as if I’m writing for myself alone, in order to get my thoughts on paper and have an inventory of my beliefs that I can refer back to as needed. I’m sure I won’t be completely successful in the attempt (even in this post I already feel like I’ve been writing way too self-consciously), but to the best of my ability I want to see if I can treat this website simply as a platform for me to think out loud, so to speak, without any expectation that it should be anything more or less than that. By taking that approach – that I’m just describing my beliefs, not trying to convert anyone – my hope is that I’ll be able to write more easily and not be paralyzed by the fear of criticism or the pressure of trying to impress anyone. If I’m not entirely sure about a particular idea, I won’t have to feel like I’m committing myself to it just by bringing it up; I’ll be able to just throw it out there for discussion, mention my own ambivalence toward it, and see what everyone else thinks. As Film Crit Hulk puts it, I’ll be able to simply explore – “to share an idea and see where it goes. To discover more about the idea and see how it bounces off people and then learn even more after that.” And if I do get responses from people whose thoughts on the subject are different from my own, I won’t have to feel like I’ve planted my flag on a particular stance and am now compelled to defend it at all costs; I’ll be able to embrace the strongest of their critiques and incorporate them into my own position as a kind of ideological upgrade.

Even better, if the person on the other side is taking a similar approach themselves, I’ll be able to work cooperatively with them – comparing ideas, learning from each other, and evaluating each other’s beliefs critically without having to worry about coming under attack for it. If neither of us is invested in trying to crush the other one or score cheap debate points, we’ll actually be able to put our heads together and help each other come to a better understanding of which of our ideas are well-founded and which are mistaken. And if the other person happens to be better-informed than I am, I’ll be able to use their knowledge to raise myself higher up that metaphorical mountain of understanding, rather than digging in my heels and resisting their insights to my own detriment.

Now, I realize that the prospect of putting your thoughts out there into the public eye for everyone to judge can be a tricky one. As Gary Klein notes, even just the expectation of having to share your thoughts can constrict your thinking at a subconscious level:

[Obsessively high standards for error avoidance] make us reluctant to speculate. The pressure to avoid errors can make us distrust any intuitions or insights we cannot validate.

In such a self-conscious mode of thinking, we’re less likely to come up with new insights or discoveries, because we don’t allow ourselves to take a more playful, freewheeling approach with our ideas. Instead of freely exploring all the most interesting possibilities, we just try to stay within the range of consensus opinion and only volunteer our “safest” ideas for judgment. And sure, this is a good way to avoid getting too much backlash for your ideas; but it also means that if there are any good ideas that happen to fall outside the consensus opinion, you’ll never be able to find them.

One solution, of course, is just to refrain from ever sharing your views in public altogether. Holiday describes one historical instance of this:

General George C. Marshall […] refused to keep a diary during World War II despite the requests of historians and friends. He worried that it would turn his quiet, reflective time into a sort of performance and self-deception. That he might second-guess difficult decisions out of concern for his reputation and future readers and warp his thinking based on how they would look.

Marshall’s self-awareness was admirable, given the situation he was in. Like Odysseus tying himself to the mast of his ship so he wouldn’t be able to act irrationally, Marshall restrained himself from sharing his thoughts – and it turned out to be the right call. Even so, his situation was unusual; he was in a position of power that allowed him to make the most of his ideas while still keeping them classified. For most of us, the situation is the exact opposite – the only way of making a difference with our ideas is by sharing them with others. Deciding to do that, then – putting all your cards on the table and letting them be subjected to public scrutiny – means you have to just bite the bullet and embrace that scrutiny. If you have a weird idea that falls outside the range of standard public opinion, you can’t be so afraid of its heterodoxy that you hesitate to even bring it up. You should put it forward fearlessly – and if you turn out to be wrong, then you should celebrate the fact that you were able to find that out quickly and correct course rather than continuing to walk around with a flawed idea inside your head. As Tetlock says:

I think humility is an integral part of being a superforecaster, but that doesn’t mean superforecasters are chickens who hang around the “maybe” zone and never say anything more than minor shades of maybe. You don’t win a forecasting tournament by saying maybe all the time. You win a forecasting tournament by taking well-considered bets.

So although I referred earlier to Manson’s advice to “hold weaker opinions,” a better phrasing might be Paul Saffo’s formulation: “strong opinions, weakly held.” As Saffo explains:

I have found that the fastest way to an effective forecast is often through a sequence of lousy forecasts. Instead of withholding judgment until an exhaustive search for data is complete, I will force myself to make a tentative forecast based on the information available, and then systematically tear it apart, using the insights gained to guide my search for further indicators and information. Iterate the process a few times, and it is surprising how quickly one can get to a useful forecast.

Since the mid-1980s, my mantra for this process is “strong opinions, weakly held.” Allow your intuition to guide you to a conclusion, no matter how imperfect – this is the “strong opinion” part. Then – and this is the “weakly held” part – prove yourself wrong. Engage in creative doubt. Look for information that doesn’t fit, or indicators that point in an entirely different direction. Eventually your intuition will kick in and a new hypothesis will emerge out of the rubble, ready to be ruthlessly torn apart once again. You will be surprised by how quickly the sequence of faulty forecasts will deliver you to a useful result.

I have to admit, it’s hard to write something for public viewing and not feel at least a little bit self-conscious about it. Aside from the awareness that I might be embarrassed by certain ideas that turn out to be wrong, it also just feels weirdly presumptuous to think that anyone should be interested in what I have to say – or worse still, that “the world needs to hear what I have to say!” It feels like an act of vanity or something.

Still, there’s always the possibility that some of the ideas on this site might actually turn out to be good ones, and that they might even help in some way. If nothing else, getting some feedback on some of these ideas might be helpful to me personally, just to clarify my beliefs and help correct the ones that are mistaken. I don’t know how likely it is that I’ll actually get that kind of feedback, of course – in fact, if all my other posts turn out to be as long as this one, I doubt that more than a handful of people will ever even read them at all – but at this point, I’ve gotten curious enough about it to at least be willing to throw a few ideas out there and see if anything sticks.

XVII.

So all right, let’s lay some groundwork. Whenever you’re talking about ideologically charged issues like politics or religion, I always think it’s a good idea to try and adhere to certain meta-ideological principles – sort of general-purpose rules of thumb that can be broadly applied across all kinds of different contexts. For me, the first and foremost of these is that you should “do what works” – i.e. you should take cues from real-world facts and experiences rather than relying on pure theory alone. When you’re trying to figure out what would be the best system of government, or the best system of economics or social organization or whatever, it can be incredibly easy to become enamored with some fantastic utopian system you’ve discovered that seems to explain everything and could theoretically fix all the world’s problems – e.g. communism, anarchism, objectivism, that kind of thing. (I’ll admit to being particularly susceptible to this kind of thinking myself.) But a lot of ideas sound flawless in theory and then turn out to break down for unforeseen reasons in practice – so if you look around and find that your favored idea has never actually been implemented successfully in the real world, it may be a red flag that the system isn’t as flawless as it seems in your mind. There may be some hidden variables there that you haven’t accounted for. A better approach, then, is to see what actually has been implemented successfully in the real world – what’s actually working in the parts of the world where people are the happiest, healthiest, and most successful – and then emulate that. Or at least use those ideas as a starting point.

The Wikipedia entry for “empiricism” sums it up pretty well:

Empiricism in the philosophy of science emphasises evidence, especially as discovered in experiments. It is a fundamental part of the scientific method that all hypotheses and theories must be tested against observations of the natural world rather than resting solely on a priori reasoning, intuition, or revelation.

Empiricism, often used by natural scientists, says that “knowledge is based on experience” and that “knowledge is tentative and probabilistic, subject to continued revision and falsification”. Empirical research, including experiments and validated measurement tools, guides the scientific method.

To me, this seems like about as solid a foundation for an accurate worldview as you can get. Base your ideas on what’s verifiably true. Do what works. And have enough intellectual humility to know that “what works” might not always be immediately obvious, and that what you think should work in your head might not always map perfectly to the facts on the ground.

In that same vein, I think it’s important to realize that when you don’t have the expertise to make the most informed possible judgment on a specific topic (and in particular here I’m thinking of issues relating to the hard sciences), it’s usually smartest to entrust that judgment to those who do, and to take what the experts have to say as a starting point for your own beliefs. After all, it’s impossible to know all the relevant information you’d need to judge every issue as accurately as possible; there will be a lot of cases where others are more knowledgeable or more qualified to determine the truth of a particular matter than you are – and in those cases, their judgment simply has a better chance of being accurate than yours does. (Pretending otherwise is how we end up with flat-earthers and anti-vaxxers telling people to “do their own research.”) If you think you’ve found some fundamental mistake in the way the world works that everyone else is overlooking, it’s possible that you really have made an important discovery, but it’s more likely that you’re the one who’s overlooked something. Acknowledging this fact isn’t an admission of stupidity; it’s just a recognition that people have different specialties and different areas of expertise, and the most accurate worldview will be one that pools together all the different pieces of knowledge from their most reliable sources.

There’s a concept in economics called the “efficient market hypothesis,” which Alexander explains like this:

The market economy is very good at what it does, which is something like “exploit money-making opportunities” or “pick low-hanging fruit in the domain of money-making”. If you see a $20 bill lying on the sidewalk, today is your lucky day. If you see a $20 bill lying on the sidewalk in Grand Central Station, and you remember having seen the same bill a week ago, something is wrong. Thousands of people cross Grand Central every week – there’s no way a thousand people would all pass up a free $20. Maybe it’s some kind of weird trick. Maybe you’re dreaming. But there’s no way that such a low-hanging piece of money-making fruit would go unpicked for that long.

In the same way, suppose your uncle buys a lot of Google stock, because he’s heard Google has cool self-driving cars that will be the next big thing. Can he expect to get rich? No – if Google stock was underpriced (ie you could easily get rich by buying Google stock), then everyone smart enough to notice would buy it. As everyone tried to buy it, the price would go up until it was no longer underpriced. Big Wall Street banks have people who are at least as smart as your uncle, and who will notice before he does whether stocks are underpriced. They also have enough money that if they see a money-making opportunity, they can keep buying until they’ve driven the price up to the right level. So for Google to remain underpriced when your uncle sees it, you have to assume everyone at every Wall Street hedge fund has just failed to notice this tremendous money-making opportunity – the same sort of implausible failure as a $20 staying on the floor of Grand Central for a week.

In the same way, suppose there’s a city full of rich people who all love Thai food and are willing to pay top dollar for it. The city has lots of skilled Thai chefs and good access to low-priced Thai ingredients. With the certainty of physical law, we can know that city will have a Thai restaurant. If it didn’t, some entrepreneur would wander through, see that they could get really rich by opening a Thai restaurant, and do that. If there’s no restaurant, we should feel the same confusion we feel when a $20 bill has sat on the floor of Grand Central Station for a week. Maybe the city government banned Thai restaurants for some reason? Maybe we’re dreaming again?

And this concept can apply to ideas as well. He continues:

We can take this beyond money-making into any competitive or potentially-competitive field. Consider a freshman biology student reading her textbook who suddenly feels like she’s had a deep insight into the structure of DNA, easily worthy of a Nobel. Is she right? Almost certainly not. There are thousands of research biologists who would like a Nobel Prize. For all of them to miss a brilliant insight sitting in freshman biology would be the same failure as everybody missing a $20 on the floor of Grand Central, or all of Wall Street missing an easy opportunity to make money off of Google, or every entrepreneur missing a great market opportunity for a Thai restaurant. So without her finding any particular flaw in her theory, she can be pretty sure that it’s wrong – or else already discovered. This isn’t to say nobody can ever win a Nobel Prize. But winners will probably be people with access to new ground that hasn’t already been covered by other $20-seekers. Either they’ll be amazing geniuses, understand a vast scope of cutting-edge material, have access to the latest lab equipment, or most likely all three.

It’s for this reason that maintaining your intellectual humility is such a logical strategy when trying to figure out what’s true. Putting your own judgment above that of the expert consensus can be a valid approach if you happen to be in a situation where you really are one of those people with privileged information, expertise, or resources that nobody else has access to. But otherwise, presuming to be able to come up with the best answers on your own, from the comfort of your armchair, just isn’t typically as reliable a source of insight as judiciously referring to the knowledge of experts and getting your cues from the way things have been successfully implemented in the real world – trusting the wisdom of the intellectual market, so to speak.

If you’re getting your information from good sources, it turns out that expert consensus is actually a pretty darn reliable guide to knowing what works and what doesn’t. And if you can use that knowledge base as your benchmark, then you can start to spin off new ideas of your own and explore new ideological directions. But you’ve got to know what knowledge is already out there first.

Green shares his thoughts on the subject:

My life this week has been dictated by natural phenomena. As Montana has continued through an unseasonably hot and dry summer, my valley has been socked with smoke, sometimes enough that experts advise me not to leave my house […] and that got substantially worse when the nearest fire in Lolo National Forest flared and burned more than 9,000 acres in one evening. […] And then of course, this Monday there was the eclipse. One of the remarkable things about the eclipse is that we knew exactly when it was going to happen for decades in advance – we had enough lead time to get a bunch of eclipse glasses made and sent to gas stations all over the country; people booked hotel rooms and braved terrible traffic – though not before first making sure that the sun would be out in the place where they were going – and we trusted all of those things. I couldn’t tell you how people figured out when eclipses were going to show up before computers, but they did it. I also don’t know how scientists fight fires, or predict them, or know to tell me when I shouldn’t go outside to avoid damaging my lungs. But I do know that someone knows. Someone knows when the eclipse will happen; someone knows when fires will probably get worse; and they’ve had their work checked by other people who also know. I don’t have the space in my head to figure all these things out on my own; and like, good thing, because the story of human progress is not a story of every single person figuring out every single thing for themselves.

[…]

When telling people that I trust experts, I sometimes hear people respond that I’m committing a logical fallacy – appealing to authority. And this is a fallacy when making an argument; saying “But this expert says so” is not a good argument. But I’m not actually making an appeal to one person; I’m making an appeal to a process that has had a good deal of success at accurately explaining and predicting stuff. The expert is the proxy for the process. I’m making an appeal, not just to the people who have figured something out, but to all of the other people who I know are going to check their work. I’m making an appeal to statistics and logic and calculus and peer review. I’m making an appeal to science. And also, I’m often not making an argument – I’m not trying to convince people of what they should believe – I’m trying to explain that when it comes to things that I’m not that interested in, or capable of learning enough about, I’m happy to accept scientific consensus because it’s going to be a whole lot better than whatever hunch I happen to have. This isn’t trusting that they’re right; it’s accepting that people who study things for a living have a far greater chance of being right than someone who just feels like arguing with them. It’s not an appeal to authority; it’s like an appeal to sanity. I get worried about the current tendency of some to feel like they must confirm everything for themselves. That’s individualism taken to maybe a fatal extreme. Human progress is a story of building on the work of others, not every single person starting from scratch. I agree that trusting individuals because they’ve had the “expert” label applied to them can lead to trouble; but one of the great achievements of our culture, of our society, is the creation and refinement of robust systems for creating and identifying trustable expertise. Of course those systems can always be refined and improved, but those who think they can just be discarded… scare me. Watching these beautiful and terrible plumes of smoke flare above my town, I felt very grateful to those who study these things for a living so I do not have to.

Of course, all this talk about trusting experts naturally raises another question: How can you determine who the real experts are in the first place? After all, the flat-earthers and anti-vaxxers will insist that they’re the ones listening to the real experts, and that the so-called mainstream scientists everyone else listens to are actually just phonies and shills. How can you know which “experts” are actually legitimate and worthy of being trusted, and which are just quacks claiming to be experts? For that matter, how can you know how much to trust your own expertise? If you really do know better than the expert consensus on a particular topic, how can you tell?

Again, it all comes back to empiricism – making sure that the sources of information you’re drawing from and the experts you’re listening to are verifiably accurate. If you can pay attention not only to how each side of a particular debate is saying the world is, but also to what they’re predicting will happen in the future, then you can go back later and see whose predictions ended up being the most accurate, and thereby get a good feel for whose ideology most closely reflects reality.

(As it happens, there was actually a study done on this in 2011; researchers tested the predictions of a wide range of pundits and politicians, and found that Paul Krugman was the most accurate, Cal Thomas was the least accurate, and the more liberal-leaning prognosticators tended to do better than the conservative-leaning ones on average – suggesting that Krugman’s views are worth listening to, Thomas’s probably less so, and the liberal worldview more closely reflects reality in general than the conservative one. This is pretty convenient for me to accept, of course, since I tend to identify more closely with the liberal tribe than the conservative tribe most of the time anyway – but this is only one (possibly flawed or outdated) study; and if more studies came out showing that the conservative pundits were the ones making more accurate predictions now, it should make me want to shift the balance of my attention accordingly.)

Likewise, in the same way that you can track the predictions of different commentators and experts to see which ones are the most accurate, you can also keep a personal scorecard for your own predictions. If you think that the Fed’s policy of quantitative easing will lead to hyperinflation, or that President Obama’s election will lead to the mass confiscation of guns, or that the global supply of oil will peak by 2010, you should write out these predictions in advance, note your level of confidence in each of them, and then look back later to see how accurate you ended up being. Did you do better than the experts? Did you do better than random chance? Depending on your results, you can either be more confident or less confident in your own judgments, and you can accordingly give them more weight or less weight than you give the opinions of other commentators you’ve kept track of. Chances are, there will be at least a few experts whose predictive abilities outperform your own – and those are the people you should be listening to and learning from.
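One common way to operationalize a scorecard like this – a version of it is the scoring rule used in Tetlock’s forecasting tournaments – is the Brier score: record each prediction as a probability, then measure the squared error once the outcome is known. A minimal sketch in Python, with made-up predictions standing in for real ones:

```python
# Each entry: (stated confidence that the event happens, whether it happened).
# These predictions are hypothetical examples, not real forecasts.

def brier_score(forecasts: list[tuple[float, bool]]) -> float:
    """Mean squared error of confidence vs. outcome.

    0.0 is perfect; 0.25 is what always answering "50%" scores;
    1.0 is confidently wrong every time.
    """
    return sum((p - happened) ** 2 for p, happened in forecasts) / len(forecasts)

my_record = [
    (0.90, True),   # "90% sure the policy passes" -- it did
    (0.70, False),  # "70% sure prices spike"      -- they didn't
    (0.20, False),  # "20% sure supply peaks"      -- it didn't
]
print(round(brier_score(my_record), 2))  # 0.18 -- better than chance
```

Run the same calculation over the public record of any pundit you follow, and you have a rough, quantitative answer to the question of whose judgment deserves more weight than your own.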

There are a lot of people who think that the “expert consensus” is always a product of corruption and incompetence – and that, for that matter, our whole society and culture and political system are irredeemably broken in a fundamental way. According to this worldview, things are worse than they’ve ever been – the True Way is continually being suppressed and forgotten – and the crisis is only growing worse. Every day we stray further from the truth.

But I don’t think this is right. Despite slipping backwards at times (sometimes significantly so), I think the overall trend for our species is to progress more and more in the right direction. The arc of humanity bends toward truth – for the simple reason that, firstly, smart people tend to be right more often; secondly, others tend to recognize who the smartest people are when they hear from them; and thirdly, they tend to pay attention to the smart people and listen to what they have to say. This isn’t always how it goes, of course – the imbalance in favor of intelligence might not even be that big – but I think that on average, there’s a reason why the foremost experts and decision-makers tend to be (and work closely alongside) people who are more intelligent than the typical schlub on the street. Overall, the ideas that come out on top tend to be the ones that are better.

On a related note, this is also why I’m wary of any ideology that calls for “tearing down the whole rotten system and starting over.” It’s true that our society has a lot of problems, and that some of these are very big, very serious problems. But we’ve also got a lot of good things going for us – a lot of social norms, institutions, and practices that really do make our lives better in vital ways, and that are crucial to preserve. (If you don’t believe it, try visiting a third-world country and comparing their institutions to our own.) We have to be extremely careful, then, not to throw the baby out with the bathwater when considering radical changes to the system. As Alexander warns, “There are many more ways to break systems than to improve them.” So although it may be easy to think that you’ve come up with some utopian solution that could solve all of our social ills by tearing down the current system, it’s worth recalling the idea of Chesterton’s Fence mentioned earlier. In all likelihood, the reason your brilliant idea hasn’t been implemented is not that nobody’s ever thought of it before, but that there’s a bull hidden behind the fence you want to tear down, and overturning the current system would create more complications than it would solve. Alexander continues:

Since [studying Marx], one of the central principles behind my philosophy has been “Don’t destroy all existing systems and hope [that some invisible force of goodness will just miraculously make] everything work out”. Systems are hard. Institutions are hard. If your goal is to replace the current systems with better ones, then destroying the current system is 1% of the work, and building the better ones is 99% of it. Throughout history, dozens of movements have doomed entire civilizations by focusing on the “destroying the current system” step and expecting the “build a better one” step to happen on its own. That never works.

And Pinker adds:

[According to the worldview of thinkers like Edmund Burke,] however imperfect society may be, we should measure it against the cruelty and deprivation of the actual past, not the harmony and affluence of an imagined future. We are fortunate enough to live in a society that more or less works, and our first priority should be not to screw it up, because human nature always leaves us teetering on the brink of barbarism. And since no one is smart enough to predict the behavior of a single human being, let alone millions of them interacting in a society, we should distrust any formula for changing society from the top down, because it is likely to have unintended consequences that are worse than the problems it was designed to fix. The best we can hope for are incremental changes that are continuously adjusted according to feedback about the sum of their good and bad consequences. […] In Burke’s famous words, written in the aftermath of the French Revolution:

[One] should approach to the faults of the state as to the wounds of a father, with pious awe and trembling solicitude. By this wise prejudice we are taught to look with horror on those children of their country who are prompt rashly to hack that aged parent in pieces, and put him into the kettle of magicians, in hopes that by their poisonous weeds, and wild incantations, they may regenerate the paternal constitution, and renovate their father’s life.

I think this is a really important insight; being too cavalier with systems that affect millions of people can lead to disaster. On a scale from “gradualist” to “revolutionary,” I tend to fall squarely on the gradualist side of the spectrum most of the time.

Having said all this, though, I should also point out that it’s possible to take this kind of prudent conservatism too far; if you’re too scared of upsetting the balance of the status quo, you may end up refusing to ever attempt to improve anything at all. It’s true that expert consensus is almost always a better indicator of truth than one particular person’s individual views – and it’s true that trying to make radical transformations to society will almost always have unforeseen negative consequences that outweigh the positive ones – but there’s also a reason why I include the word “almost” in both of those statements. Every now and then, the conventional wisdom really is completely off base, and every now and then there really is an opportunity to improve things that hasn’t been implemented yet. If you travelled back to the 1700s, for instance, the conventional wisdom would have been that it was OK to own slaves – and anyone arguing that black people should not only be free, but should have full equality under the law, would have been considered a fringe radical. Even as recently as a few decades ago, if you were trying to argue for gay rights, the only way to remain within the Overton Window of respectable mainstream opinion would have been to hedge your position and include a bunch of caveats clarifying that you weren’t necessarily advocating for anything as extreme and disruptive as full marriage rights (Heaven forbid), just that you wanted to reduce the severity of the anti-sodomy laws. In retrospect, of course, we can see that in both of those cases the conventional wisdom of the time was simply wrong; the Overton Window was far from where it should have been. Taking the outside view of this fact, then, what makes us think that we just happen to have been born into the exact point in human history where, for the first time ever, there are no such inefficiencies or failures that could be improved upon, and everything is working exactly as well as it possibly could? Are we really to believe that all the good ideas have been had already?

The efficient market hypothesis is a powerful explanatory tool for describing ideal markets – and it can be tempting to want to generalize it to encompass the totality of human experience. (Call it the “efficient world hypothesis.”) But to say that everything is working as well as it possibly can and that there’s no room for useful new ideas would be, in my opinion, naïve to say the least. As Alexander explains in his discussion of Yudkowsky’s writings on the subject:

Go too far with this kind of logic, and you start accidentally proving that nothing can be bad anywhere.

Suppose you thought that modern science was broken, with scientists and grantmakers doing a bad job of focusing their discoveries on truly interesting and important things. But if this were true, then you (or anyone else with a little money) could set up a non-broken science, make many more discoveries than everyone else, get more Nobel Prizes, earn more money from all your patents and inventions, and eventually become so prestigious and rich that everyone else admits you were right and switches to doing science your way. There are dozens of government bodies, private institutions, and universities that could do this kind of thing if they wanted. But none of them have. So “science is broken” seems like the same kind of statement as “a $20 bill has been on the floor of Grand Central Station for a week and nobody has picked it up”. Therefore, modern science isn’t broken.

Or: suppose you thought that health care is inefficient and costs way too much. But if this were true, some entrepreneur could start a new hospital / clinic / whatever that delivered health care at lower prices and with higher profit margins. All the sick people would go to them, they would make lots of money, investors would trip over each other to fund their expansion into new markets, and eventually they would take over health care and be super rich. So “health care is inefficient and overpriced” seems like the same kind of statement as “a $20 bill has been on the floor of Grand Central Station for a week and nobody has picked it up.” Therefore, health care isn’t inefficient or overpriced.

Or: suppose you think that US cities don’t have good mass transit. But if lots of people want better mass transit and are willing to pay for it, this is a great money-making opportunity. Entrepreneurs are pretty smart, so they would notice this money-making opportunity, raise some funds from equally-observant venture capitalists, make a better mass transit system, and get really rich off of all the tickets. But nobody has done this. So “US cities don’t have good mass transit” seems like the same kind of statement as “a $20 bill has been on the floor of Grand Central Station for a week and nobody has picked it up.” Therefore, US cities have good mass transit, or at least the best mass transit that’s economically viable right now.

This proof of God’s omnibenevolence is followed by Eliezer’s observations that the world seems full of evil. For example:

Eliezer’s wife Brienne had Seasonal Affective Disorder. The consensus treatment for SAD is “light boxes”, very bright lamps that mimic sunshine and make winter feel more like summer. Brienne tried some of these and they didn’t work; her seasonal depression got so bad that she had to move to the Southern Hemisphere three months of every year just to stay functional. No doctor had any good ideas about what to do at this point. Eliezer did some digging, found that existing light boxes were still way less bright than the sun, and jury-rigged a much brighter version. This brighter light box cured Brienne’s depression when the conventional treatment had failed. Since Eliezer, a random layperson, was able to come up with a better SAD cure after a few minutes of thinking than the establishment was recommending to him, this seems kind of like the relevant research community leaving a $20 bill on the ground in Grand Central.

Eliezer spent a few years criticizing the Bank of Japan’s macroeconomic policies, which he (and many others) thought were stupid and costing Japan trillions of dollars in lost economic growth. A friend told Eliezer that the professionals at the Bank surely knew more than he did. But after a few years, the Bank of Japan switched policies, the Japanese economy instantly improved, and now the consensus position is that the original policies were deeply flawed in exactly the way Eliezer and others thought they were. Doesn’t that mean Japan left a trillion-dollar bill on the ground by refusing to implement policies that even an amateur could see were correct?

And finally:

For our central example, we’ll be using the United States medical system, which is, so far as I know, the most broken system that still works ever recorded in human history. If you were reading about something in 19th-century France which was as broken as US healthcare, you wouldn’t expect to find that it went on working when overloaded with a sufficiently vast amount of money. You would expect it to just not work at all.

In previous years, I would use the case of central-line infections as my go-to example of medical inadequacy. Central-line infections, in the US alone, killed 60,000 patients per year, and infected an additional 200,000 patients at an average treatment cost of $50,000/patient.

Central-line infections were also known to decrease by 50% or more if you enforced a five-item checklist that included items like “wash your hands before touching the line.”

Robin Hanson has old Overcoming Bias blog posts on that untaken, low-hanging fruit. But I discovered while re-Googling in 2015 that wider adoption of hand-washing and similar precautions are now finally beginning to occur, after many years – with an associated 43% nationwide decrease in central-line infections. After partial adoption.

Since he doesn’t want to focus on a partly-solved problem, he continues to the case of infant parenteral nutrition. Some babies have malformed digestive systems and need to have nutrient fluid pumped directly into their veins. The nutrient fluid formula used in the US has the wrong kinds of lipids in it, and about a third of babies who get it die of brain or liver damage. We’ve known for decades that the nutrient fluid formula has the wrong kind of lipids. We know the right kind of lipids and they’re incredibly cheap and there is no reason at all that we couldn’t put them in the nutrient fluid formula. We’ve done a bunch of studies showing that when babies get the right nutrient fluid formula, the 33% death rate disappears. But the only FDA-approved nutrient fluid formula is the one with the wrong lipids, so we just keep giving it to babies, and they just keep dying. Grant that the FDA is terrible and ruins everything, but over several decades of knowing about this problem and watching the dead babies pile up, shouldn’t somebody have done something to make this system work better?

We’ve got a proof that everything should be perfect all the time, and a reality in which a bunch of babies keep dying even though we know exactly how to save them for no extra cost.

It’s true that in most cases, the system tends to be the way it is for a reason; if you tried to change it, you’d quickly realize why it had been set up the way it was in the first place, and you’d want to change it back. Coming up with a revolutionary new idea that could legitimately change the world for the better is rare – like discovering a proverbial million-dollar bill on the ground that nobody’s picked up yet – and if you think you’ve discovered one, your default reaction should be to treat that idea with caution, even outright suspicion. But just because the odds of hitting the jackpot are small doesn’t mean that no one can ever do so – people win the lottery every day. And the same is true for ideas; even if the odds of coming up with a world-changing idea are tiny, every good idea that has ever emerged throughout history necessarily had someone who thought of it first. So if you should happen to encounter such a case yourself, you shouldn’t just automatically dismiss it as too improbable to exist – that would be like saying there can be no such thing as lottery winners. You should take it as seriously as the potential payoff demands.

There’s a school of thought that tries to defend the efficient world hypothesis by saying that even though our society is markedly different now from how it was during the days of feudalism, monarchy, and so forth, that’s not because those systems were anything less than optimal at the time – it’s just that our more advanced education levels, cultural norms, and other social traits allow us to implement systems nowadays that wouldn’t have been workable back then. In other words, feudalism and monarchy really were the best possible systems for the people living in those eras, because their level of social development wouldn’t have been sufficient to sustain anything better. And similarly, countries living under totalitarian regimes today essentially have no better option, because if their societies were capable of handling democracy, they’d already be democratic. According to this view, the status quo is always a product of societies settling naturally into their optimal equilibrium; in every time and every place, the way things work is a close approximation of the best way it’s possible for them to work, given the level of development in that society.

Personally, I don’t buy this view (for a lot of reasons). But even if it were actually true – even if the status quo were always optimal, given the specific circumstances of a particular society – that still wouldn’t suggest that any new idea to change the system must therefore automatically be wrong. After all, the circumstances of a society are always shifting, and it’s always possible for the ideological winds to shift in such a way that a society may finally become ready to accept a new idea that it wasn’t previously ready for. The issue of gay marriage is a classic example where the national consensus went from “not even a debate” to “absolutely a debate” to “not even a debate” again – but this time in the other direction – in just a few short years. An idea that would have seemed like a pipe dream a mere decade or two earlier finally saw its time come – and that wouldn’t have happened if everybody had believed that improving the status quo was impossible. As Yudkowsky says:

Not every change is an improvement, but every improvement is a change […] You can’t do anything better unless you can manage to do it differently.

What’s more, once a major change is finally made, it’s often hard to remember why it ever seemed like such a big deal in the first place. Ideas like gay marriage – not to mention ideas like desegregation and even democracy itself – had a lot of people up in arms when they were first proposed, provoking all kinds of uproar about the downfall of civilization (and admittedly, there are still people who feel this way about them now). But to most modern onlookers, those objections seem absurd. We implemented these supposedly radical ideas, and the world kept turning just as it had before. Things improved considerably, in fact, and we’ve reached the point where ideas that used to be considered radical are now part of the bare minimum standard of normality. As Richard Dawkins puts it: “Yesterday’s dangerous idea is today’s orthodoxy and tomorrow’s cliché.”

So when it comes to considering outlandish new ideas, we can’t be afraid to explore new ideological territory, to speculate, to experiment. We should keep our experimentation within rational limits, of course; we shouldn’t want to blow up the entire system just for the sake of shaking things up and trying something new. But we can’t be so afraid of disrupting the parts of the system that are working well that we refuse to even consider improving the parts that aren’t. (After all, I doubt the people who were being forced to live as second-class citizens under the gay marriage bans and segregation laws would have agreed that the system was “working fine” for them.) Evolution requires variation – so if we want to evolve as a society, we have to be willing to try out different ideas. The only way to “do what works” is to first see what works.

Again, this doesn’t mean that you should always just assume that the most counterintuitive position is the best one. As satisfying as it might be to feel like you’ve uniquely figured out some secret truth that nobody else has, knee-jerk contrarianism isn’t any better than knee-jerk conformity – because, almost by definition, the counterintuitive position is usually the one more likely to be wrong. Alexander illustrates this point with a couple of examples:

Ask any five year old child, and [they] can tell you that death is bad. Death is bad because it kills you. There is nothing subtle about it, and there does not need to be. Death universally seems bad to pretty much everyone on first analysis, and what it seems, it is.

But as has been pointed out, along with the gigantic cost, death does have a few small benefits. It lowers overpopulation, it allows the new generation to develop free from interference by their elders, it provides motivation to get things done quickly. Precisely because these benefits are so much smaller than the cost, they are hard to notice. It takes a particularly subtle and clever mind to think them up. Any idiot can tell you why death is bad, but it takes a very particular sort of idiot to believe that death might be good.

So pointing out this contrarian position, that death has some benefits, is potentially a signal of high intelligence. It is not a very reliable signal, because once the first person brings it up everyone can just copy it, but it is a cheap signal. And to the sort of person who might not be clever enough to come up with the benefits of death themselves, and only notices that wise people seem to mention death can have benefits, it might seem super extra wise to say death has lots and lots of great benefits, and is really quite a good thing, and if other people should protest that death is bad, well, that’s an opinion a five year old child could come up with, and so clearly that person is no smarter than a five year old child. Thus Eliezer’s title for this mentality, “Pretending To Be Wise”.

If dwelling on the benefits of a great evil is not your thing, you can also pretend to be wise by dwelling on the costs of a great good. All things considered, modern industrial civilization – with its advanced technology, its high standard of living, and its lack of typhoid fever – is pretty neat. But modern industrial civilization also has many costs: alienation from nature, strains on the traditional family, the anonymity of big city life, pollution and overcrowding. These are real costs, and they are certainly worth taking seriously; nevertheless, the crowds of emigrants trying to get from the Third World to the First, and the lack of any crowd in the opposite direction, suggest the benefits outweigh the costs. But in my estimation – and speak up if you disagree – people spend a lot more time dwelling on the negatives than on the positives, and most people I meet coming back from a Third World country have to talk about how much more authentic their way of life is and how much we could learn from them. This sort of talk sounds Wise, whereas talk about how nice it is to have buses that don’t break down every half mile sounds trivial and selfish.

So my hypothesis is that if a certain side of an issue has very obvious points in support of it, and the other side of an issue relies on much more subtle points that the average person might not be expected to grasp, then adopting the second side of the issue will become a signal for intelligence, even if that side of the argument is wrong.

That’s why it’s worth bearing in mind: Just because an idea sounds smarter or more sophisticated or more complex than the boring old mainstream view doesn’t mean it’s actually more accurate or more useful. Sometimes it’s just fool’s gold.

So all right then, you might say, that’s all well and good – but how can we figure out which areas really are the ones where the mainstream consensus is most likely to be wrong? Well, it’s not always easy. As Klosterman writes:

If I’m wrong about something specific, it’s (usually) my own fault, and someone else is (usually, but not totally) right.

But what about the things we’re all wrong about?

What about ideas that are so accepted and internalized that we’re not even in a position to question their fallibility? These are ideas so ingrained in the collective consciousness that it seems foolhardy to even wonder if they’re potentially untrue. Sometimes these seem like questions only a child would ask, since children aren’t paralyzed by the pressures of consensus and common sense. It’s a dissonance that creates the most unavoidable of intellectual paradoxes: When you ask smart people if they believe there are major ideas currently accepted by the culture at large that will eventually be proven false, they will say, “Well, of course. There must be. That phenomenon has been experienced by every generation who’s ever lived, since the dawn of human history.” Yet offer those same people a laundry list of contemporary ideas that might fit that description, and they’ll be tempted to reject them all.

Still, if you can put yourself in the right frame of mind, it can be easier to notice flaws in the conventional wisdom. The subtitle of Klosterman’s book – “Thinking About the Present As If It Were the Past” – offers one such helpful mentality. If you can put yourself outside your current social context and try to look at things through the eyes of an outsider, you may start to notice that some of the things you’ve always taken for granted don’t actually make that much sense when you have to justify them from first principles.

Graham offers similar advice: if certain ideas seem to dominate the popular consensus not because there are good justifications for them, but simply because it would be considered weird or shameful not to believe them, that’s a red flag that those ideas can’t actually hold up on their own strength – that they’re merely the product of the intellectual fashions of the day, and are therefore worth probing and questioning further:

Have you ever seen an old photo of yourself and been embarrassed at the way you looked? Did we actually dress like that? We did. And we had no idea how silly we looked. It’s the nature of fashion to be invisible, in the same way the movement of the earth is invisible to all of us riding on it.

What scares me is that there are moral fashions too. They’re just as arbitrary, and just as invisible to most people. But they’re much more dangerous. Fashion is mistaken for good design; moral fashion is mistaken for good. Dressing oddly gets you laughed at. Violating moral fashions can get you fired, ostracized, imprisoned, or even killed.

If you could travel back in a time machine, one thing would be true no matter where you went: you’d have to watch what you said. Opinions we consider harmless could have gotten you in big trouble. I’ve already said at least one thing that would have gotten me in big trouble in most of Europe in the seventeenth century, and did get Galileo in big trouble when he said it – that the earth moves.

It seems to be a constant throughout history: In every period, people believed things that were just ridiculous, and believed them so strongly that you would have gotten in terrible trouble for saying otherwise.

Is our time any different? To anyone who has read any amount of history, the answer is almost certainly no. It would be a remarkable coincidence if ours were the first era to get everything just right.

It’s tantalizing to think we believe things that people in the future will find ridiculous. What would someone coming back to visit us in a time machine have to be careful not to say? That’s what I want to study here.

[…]

Let’s start with a test: Do you have any opinions that you would be reluctant to express in front of a group of your peers?

If the answer is no, you might want to stop and think about that. If everything you believe is something you’re supposed to believe, could that possibly be a coincidence? Odds are it isn’t. Odds are you just think what you’re told.

The other alternative would be that you independently considered every question and came up with the exact same answers that are now considered acceptable. That seems unlikely, because you’d also have to make the same mistakes. Mapmakers deliberately put slight mistakes in their maps so they can tell when someone copies them. If another map has the same mistake, that’s very convincing evidence.

Like every other era in history, our moral map almost certainly contains a few mistakes. And anyone who makes the same mistakes probably didn’t do it by accident. It would be like someone claiming they had independently decided in 1972 that bell-bottom jeans were a good idea.

If you believe everything you’re supposed to now, how can you be sure you wouldn’t also have believed everything you were supposed to if you had grown up among the plantation owners of the pre-Civil War South, or in Germany in the 1930s – or among the Mongols in 1200, for that matter? Odds are you would have.

Back in the era of terms like “well-adjusted,” the idea seemed to be that there was something wrong with you if you thought things you didn’t dare say out loud. This seems backward. Almost certainly, there is something wrong with you if you don’t think things you don’t dare say out loud.

[…]

Great work tends to grow out of ideas that others have overlooked, and no idea is so overlooked as one that’s unthinkable. Natural selection, for example. It’s so simple. Why didn’t anyone think of it before? Well, that is all too obvious. Darwin himself was careful to tiptoe around the implications of his theory. He wanted to spend his time thinking about biology, not arguing with people who accused him of being an atheist.

In the sciences, especially, it’s a great advantage to be able to question assumptions. The m.o. of scientists, or at least of the good ones, is precisely that: look for places where conventional wisdom is broken, and then try to pry apart the cracks and see what’s underneath. That’s where new theories come from.

A good scientist, in other words, does not merely ignore conventional wisdom, but makes a special effort to break it. Scientists go looking for trouble. This should be the m.o. of any scholar, but scientists seem much more willing to look under rocks.

Of course, actively going against the grain of peer pressure doesn’t always come naturally; seeking out ideas that you know everybody thinks are wrong can feel, well, wrong. But one way to combat your instinctive inclination to follow the crowd is to imagine how you might feel if the crowd’s stance were different. Alexander suggests imagining which of your beliefs might change if you found out that what you thought was the popular consensus was actually an illusion (created by alien experimenters or something) and the real consensus was the complete opposite view:

[There’s] something I said a while back on Twitter:

[…]

I feel like this is a good thought experiment. Which beliefs of yours would survive that knowledge, be so strong that you would tell the experimenter that you are right or they are wrong, or make you start thinking that it’s all part of a meta-experiment like in [this] story? Which ones would you start to doubt in ways that you might not have thought of back when they were common? Which, if any, would you say “Yeah, I knew it all along, I guess I was just too scared to admit it”?

I’m thinking here of antebellum Southerners, let’s say early 1800s. Their society is built around slavery. There are a couple of abolitionists around, but not many, and none who can force anyone to listen to them. Pretty much everyone around them says slavery is okay, the books they read from the past are all about Romans or Israelites who thought (rather different forms of) slavery were okay, and they have heard a lot of plausible-sounding arguments justifying slavery.

Now bring them forward to the present day. Tell them “Right now in the present day pretty much every single person believes that slavery is morally wrong. No one would justify it. Here, come out of the laboratory and spend a few years living in our slave-free society.”

I don’t know if the Southerner would learn a whole lot of new facts during this period. They might learn that black people could be pretty capable and intelligent, but Frederick Douglass was a person, everyone knew he was smart, that didn’t change anyone’s mind. Yet even without learning many new facts, I can’t imagine he would stay pro-slavery very long.

And I wonder whether this is purely a conformity thing, and upon being returned to the antebellum South he would start conforming with them again, or whether it is a one-directional effect that primes your thoughts to go in the direction of the truth and allows you to see new valid arguments, and that upon going back to the South he would be a little wiser than his countrymen.

And I also wonder whether a sufficiently smart Southerner could do all this via a thought experiment, say “I think slavery is pretty okay now, but imagine I went to a world where everyone was absolutely certain it was terrible, how bad would I feel about it?” and get all the benefits of spending a while in our world and going through all that moral reflection without ever actually leaving the antebellum South. And if this would be a more powerful intuition pump than just asking him to sit down and think about slavery for a few hours.

This is a pretty powerful ethical test for me. I imagine waking up in that Matrix pod and being told that no one in the real world believes in abortion, that pro-choice is obviously horrible, that all my fellow experimental subjects saw through it, that as far as they can tell I’m just a psychopath. And I feel like I would still argue “No, actually, I think you guys are wrong.” (but, uh, your mileage may vary)

If it was vegetarianism – if they said no one in the real world ate meat or had tried to justify factory farming, and every single one of my co-participants had become vegan animal rights activists – I don’t think there’s a lot I could say to them. “Sorry, I have an intense disgust reaction to all vegetables which has thwarted all of my attempts at vegetarianism?” “Yeah, we know, we put that in there to make it a hard choice.”

There are some issues where I could imagine it going either way. If the alien simulators were conservative, I could imagine exactly the way in which I would feel really stupid for having ever believed in liberalism. And if the alien simulators were liberal, I could imagine exactly how it would feel to get embarrassed for ever having flirted with conservative ideas. I don’t think that’s necessarily a flaw in the thought experiment. Both of those feelings are useful to me.

There’s a similar thought exercise you can do when assessing the value of a particular idea or cultural practice, where you imagine an alternate version of history in which the idea or practice in question didn’t exist and nobody had ever even considered it before, and ask: Would it still make sense to introduce the idea into the world and start applying it in the modern day? For instance, if the practice of spanking children had never existed and somebody tried to introduce it today, would the idea still fly? How about the concept of beauty pageants? What about boxing matches? If it doesn’t seem like the idea would be widely accepted if it weren’t already part of the status quo, that’s a good indication that the only reason it is widely accepted in the real world is that it’s already part of the status quo – not necessarily because it’s actually optimal to have it around. The status quo has its own kind of inertia that makes people want to resist doing things differently from how they’re already being done; again, it goes back to the mentality of “Why change something that’s already working fine?” But you have to avoid falling into the trap of this status quo bias as best you can, because otherwise the results can be harmful or simply embarrassing, as Alexander points out:

Alex Tabarrok beat me to the essay on Oregon’s self-service gas laws that I wanted to write.

Oregon is one of two US states that bans self-service gas stations. Recently, they passed a law relaxing this restriction – self-service is permissible in some rural counties during odd hours of the night. Outraged Oregonians took to social media to protest that self-service was unsafe, that it would destroy jobs, that breathing in gas fumes would kill people, that gas pumping had to be performed by properly credentialed experts – seemingly unaware that most of the rest of the country and the world does it without a second thought.

…well, sort of. All the posts I’ve seen about it show the same three Facebook comments. So at least three Oregonians are outraged. I don’t know about the rest.

But whether it’s true or not, it sure makes a great metaphor. Tabarrok plays it for all it’s worth:

Most of the rest of America – where people pump their own gas every day without a second thought – is having a good laugh at Oregon’s expense. But I am not here to laugh because in every state but one where you can pump your own gas you can’t open a barbershop without a license. A license to cut hair! Ridiculous. I hope people in Alabama are laughing at the rest of America. Or how about a license to be a manicurist? Go ahead Connecticut, laugh at the other states while you get your nails done. Buy contact lenses without a prescription? You have the right to smirk British Columbia!

All of the Oregonian complaints about non-professionals pumping gas – “only qualified people should perform this service”, “it’s dangerous” and “what about the jobs” – are familiar from every other state, only applied to different services.

Since reading Tabarrok’s post, I’ve been trying to think of more examples of this sort of thing, especially in medicine. There are way too many discrepancies in approved medications between countries to discuss every one of them, but did you know melatonin is banned in most of Europe? (Europeans: did you know melatonin is sold like candy in the United States?) Did you know most European countries have no such thing as “medical school”, but just have college students major in medicine, and then become doctors once they graduate from college? (Europeans: did you know Americans have to major in some random subject in college, and then go to a separate place called “medical school” for four years to even start learning medicine?) Did you know that in Puerto Rico, you can just walk into a pharmacy and get any non-scheduled drug you want without a doctor’s prescription? (source: my father; I have never heard anyone else talk about this, and nobody else even seems to think it is interesting enough to be worth noting).

And I want to mock the people who are doing this the “wrong” way – but can I really be sure? If each of these things decreased the death rate 1%, maybe it would be worth it. But since nobody notices 1% differences in death rates unless they do really good studies, it would just look like some state banning things for no reason, and everyone else laughing at them.

Actually, how sure are we that Oregon was wrong to ban self-service gas stations? How do disabled people pump their gas in most of the country? And is there some kind of negative effect from breathing in gas fumes? I have never looked into any of this.

Maybe the real lesson of Oregon is to demonstrate a sort of adjustment to prevailing conditions. There’s an old saying: “Everyone driving faster than you is a maniac; anyone driving slower than you is a moron”. In the same way, no matter what the current level of regulation is, removing any regulation will feel like inviting catastrophe, and adding any regulation will feel like choking on red tape.

Except it’s broader than regulation. Scientific American recently ran an article on how some far-off tribes barely talk to their children at all. New York Times recently claimed that “in the early 20th century, some doctors considered intellectual stimulation so detrimental to infants that they routinely advised young mothers to avoid it”. And our own age’s prevailing wisdom of “make sure your baby has listened to all Beethoven symphonies by age 3 months or she’ll never get into college” is based on equally flimsy evidence, yet somehow it still feels important to me. If I don’t make my kids listen to Beethoven, it will feel like some risky act of defiance; if I don’t take the early 20th century advice to avoid overstimulating them, it will feel more like I’m dismissing people who have been rightly tossed on the dungheap of history.

And then there’s the point from the recent discussion of Madness and Civilization about how 18th century doctors thought hot drinks would destroy masculinity and ruin society. Nothing that’s happened since has really disproved this – indeed, a graph of hot drink consumption, decline of masculinity, and ruinedness of society would probably show a pretty high correlation – it’s just somehow gotten tossed in the bin marked “ridiculous” instead of the bin marked “things we have to worry about”.

So maybe the scary thing about Oregon is how strongly we rely on intuitions about absurdity. If something doesn’t immediately strike us as absurd, then we have to go through the same plodding motions of debate that we do with everything else – and over short time scales, debate is interminable and doesn’t work. Having a notion strike us as absurd short-circuits that and gets the job done – but the Oregon/everyone-else divide shows that intuitions about absurdity are artificial and don’t even survive state borders, let alone genuinely different cultures and value systems.

And maybe this is scarier than usual because I just read Should Schools Ban Kids From Having Best Friends? I assume this is horrendously exaggerated and taken out of context and all the usual things that we’ve learned to expect from news stories, but it got me thinking. Right now enough people are outraged at this idea that I assume it’ll be hard for it to spread too far – and even if it does spread, we can at least feel okay knowing that parents and mentors and other people in society will maintain a belief in friendship and correct kids if schools go wrong. But what if it catches on? What if, twenty years from now, the idea of banning kids from having best friends has stopped generating an intuition of absurdity? Then if we want kids to still be allowed to have best friends, we’re going to have to (God help us) debate it. Have you seen the way our society debates things?

And I know some people see this and say it proves rational debate is useless and we should stop worrying about it. But trusting whatever irrational forces determine what sounds absurd or not doesn’t sound so attractive either. I think about it, and I want to encourage people to be really, really good at rational debate, just in case something terrible loses its protective coating of absurdity, or something absolutely necessary gains it, and our ability to actually judge whether things are good or bad and convince other people of it is all that stands between us and disaster.

And, uh, maybe the people who say kids shouldn’t be allowed to have best friends are right. I admit they’ve thought about this a lot longer than I have. My problem isn’t that someone thinks this. It’s that so much – even the legitimacy of friendship itself – can now depend on our culture’s explicit rationality. And our culture’s explicit rationality is so bad. And that the only alternative to dragging everything before the court of explicit rationality is some version of Chesterton’s Fence, ie the very heuristic telling Oregonians to defend full-service gas stations to the death. There is no royal road.

Maybe this is a good time to get on our chronophones with Oregon (or more prosaically, use the Outside View). Figure out what cognitive strategies you would recommend to an Oregonian trying to evaluate self-service gas stations. Then try to use those same strategies yourself. And try to imagine the level of careful thinking and willingness to question the status quo it would take to make an Oregonian get the right answer here, and be skeptical of any conclusions you’ve arrived at with any less.

Status quo bias can be powerful. A lot of times, you won’t be able to get someone to break their comfortable patterns of thought and behavior unless there’s some kind of crisis – some kind of ideological jolt to their system that serves as a proverbial wake-up call. But that’s why it’s so important to always be exploring weird and exotic new ideas – so that if such a wake-up call ever does occur, you’ll be ready for it. Milton Friedman put it best:

There is enormous inertia – a tyranny of the status quo – in private and especially governmental arrangements. Only a crisis – actual or perceived – produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes politically inevitable.

Even if the system seems to be functioning fairly well at the moment, it’s always a good idea to explore alternative possibilities, because there may come a point in the near future when “good enough” turns out to no longer be good enough. Circumstances are always changing, and what works best now may not always be what works best in the future; so we should always be looking for opportunities to improve our ideologies and update them as new information becomes available.

XVIII.

I think one of the biggest reasons why it’s so hard to strike the right balance between trusting expert consensus and not being unduly biased toward the status quo – between trying to improve the system and not getting overly enamored with utopian ideas that promise to make everything perfect – is that there’s always this underlying desire to seek out solutions that are perfect and definitive and final and don’t come with any drawbacks. As commenter NoIAmNumber4 writes:

We have been trained to seek out flawless, mathematically complete solutions that are all upside and no downside. The plethora of police shows is an example of this, but so is politics – vote for me and all your problems will be magically solved.

The problem is that there is very rarely any such thing as a perfect solution, only tradeoffs of moral priorities. […] You [may] want an argument that abortion is either unequivocally good or unequivocally bad, because we have been trained to carry out our political discourse on those terms. But […] that isn’t the case.

The best we can do is decide what values we think are more paramount than others and accept the ethical tradeoff we are making. “Sometimes in the defense of liberty and freedom, people die in terrorist attacks” is a good example.

But this kind of nuanced thinking is difficult, painful and doesn’t fit into a 5 second sound bite – and requires considered thought to accept or defend. Since we are never taught that, especially in school, it is not common.

Our world is getting more and more complex. Decades ago we could have gotten away with avoiding the responsibility. The more complex the world gets, the less true that becomes.

And this is the last major point I want to talk about here. A lot of the ideologies being peddled nowadays like to claim that their way is the only way to ensure absolute freedom for everyone or absolute equality for everyone or what have you. In truth, though, absolutes like these aren’t technically possible, because different freedoms and different interests are often at odds, and they have to be balanced against each other. This isn’t something that ideologues often acknowledge, of course, as Alexander notes:

Politicians don’t think in terms of thresholds. No one ever says “The more regulations we put on businesses, the fewer customers will get scammed by shady con men. But also the more likely it is that we unnecessarily penalize honest businesses. So we need to find the threshold value that minimizes the total unfairness to businesses and customers.” Instead they say either “We need to fight for more regulations and anyone who says otherwise is in the pay of Big Business!” or “We need to cut through all the red tape and anyone who says otherwise is in the pay of Big Government!”

No one ever says “The more restrictions we place on welfare, the more certain we’ll be that no one is abusing the system. But the more restrictions we place on welfare, the more certain we will be that some poor people who desperately need it can’t get it. Therefore, we should determine the relative disutilities of people defrauding us and of needy people not being able to use the system, and act to maximize total utility.” Instead they say “Anyone who opposes tight welfare restrictions is a welfare queen trying to scam you!” or “Anyone who wants any welfare restrictions hates poor people!”
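To make that threshold framing concrete, here’s a minimal sketch of what “determine the relative disutilities and act to maximize total utility” would look like as an actual calculation. (This is just a toy model in Python; the cost curves and weights are made-up placeholders for illustration, not real policy estimates.)

```python
# Toy model of the threshold-finding Alexander describes: instead of arguing
# for "more" or "less" restriction in the abstract, pick the restriction
# level that minimizes total expected disutility. All numbers are invented.

def total_disutility(strictness, fraud_weight=1.0, exclusion_weight=5.0):
    """Expected harm at a given restriction level (0 = none, 1 = maximal)."""
    fraud_cost = fraud_weight * (1.0 - strictness) ** 2   # fraud falls as rules tighten
    exclusion_cost = exclusion_weight * strictness ** 2   # more needy people shut out as rules tighten
    return fraud_cost + exclusion_cost

# Brute-force scan over candidate thresholds in [0, 1].
candidates = [i / 1000 for i in range(1001)]
best = min(candidates, key=total_disutility)
print(f"strictness minimizing total disutility: {best:.3f}")  # ~0.167 with these weights
```

The point isn’t the particular numbers – it’s that as soon as you frame the question this way, the answer comes out as a threshold somewhere in the middle, rather than as one of the two all-or-nothing slogans.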

But grey areas and tradeoffs like these almost always exist, and you have to be able to recognize them if you genuinely want to get things right. As Harris writes:

It is clear that we face both practical and conceptual difficulties when seeking to maximize human well-being. Consider, for instance, the tensions between freedom of speech, the right to privacy, and the duty of every government to keep its citizens safe. Each of these principles seems fundamental to a healthy society. The problem, however, is that at their extremes, each is hostile to the other two. Certain forms of speech painfully violate people’s privacy and can even put society itself in danger. Should I be able to film my neighbor through his bedroom window and upload this footage onto YouTube as a work of “journalism”? Should I be free to publish a detailed recipe for synthesizing smallpox? Clearly, appropriate limits to free expression exist. Likewise, too much respect for privacy would make it impossible to gather the news or to prosecute criminals and terrorists. And too zealous a commitment to protecting innocent people can lead to unbearable violations of both privacy and freedom of expression. How should we balance our commitment to these various goods?

Ultimately, coming up with answers to these questions is what deliberation and debate are all about. It may be tempting to indulge in absolutist thinking, and to act like every ideological dispute is totally one-sided and resolving it is just a matter of implementing the one perfect answer that fixes everything. But in reality, it’s almost always a matter of weighing some good things against other good things, and some bad things against other bad things, and just trying to find the best solution you can. This often means that you won’t be able to get 100% of what you want – and neither will your opponents – but being able to engage in that kind of give-and-take is the only way to ensure that you’ll be able to successfully coexist with your fellow human beings in the long run. As Manson writes:

There’s a common saying in the US that “Freedom is not free.” The saying is usually used in reference to the wars fought and won (or lost) to protect the values of the country. It’s a way of reminding people that, hey, this didn’t just magically happen; thousands of people were killed and/or died for us to sit here and sip over-priced mocha frappuccinos and say whatever the fuck we want.

And it’s true.

The idea is that the basic human rights we enjoy – free speech, freedom of religion, freedom of the press – were earned through the sacrifice against some external force, some evil threat.

But people have seemed to conveniently forget that freedom is earned through internal sacrifices as well. Freedom can only exist when you are willing to tolerate views that oppose your own, when you’re willing to give up some of your desires for the sake of a safe and healthy community, when you’re willing to compromise and accept that sometimes things don’t go your way and that’s fine.

In a weird sense, true freedom doesn’t exist. Because the only way for human rights to persist is for everyone to collectively agree to accept that things don’t have to go their way 100% of the time.

But the last couple decades, I fear that people have confused freedom with a lack of discomfort. They have forgotten about that necessary internal struggle.

They want a freedom to express themselves but they don’t want to have to deal with views that may upset or offend them in some way. They want a freedom to enterprise but they don’t want to pay taxes to support the legal machinery that makes it possible. They want a freedom to elect representatives to government but they don’t want to compromise when they’re on the losing side.

A free and functioning democracy demands a populace that is able to sustain discomfort, that is able to tolerate dissatisfaction, that is able to be charitable and forgiving of groups whose views stand in contrast to one’s own, and most importantly, that is able to remain unswayed in the face of some violent threat.

What I fear we’re seeing now is a loss of that ability to handle discomfort and dissatisfaction. We’re seeing a lazy entitlement wash over the world where everyone feels as though they deserve what they want from their government the second they want it, without thought of repercussions or the rest of the population.

Or as one Reddit comment sadly put it recently, “It seems like people don’t actually want democracy anymore, they want a dictator who agrees with them.”

But this constant state of mild dissatisfaction – this is what freedom actually tastes like. And if people continue to lose their ability to stomach it, then I fear one day it will be gone.

When you engage with people who disagree with you, inevitably you’ll encounter viewpoints that baffle or infuriate you. Inevitably you’ll run up against people whose values and interests seem diametrically opposed to your own. Finding a resolution that makes everybody maximally happy isn’t always possible; sometimes you’ll have to subordinate good causes to the service of even better causes. That’s just life. When you do have to balance competing considerations against each other, though, there’s one principle that I think is always good to follow: It’s better to err on the side of kindness and understanding than on the side of bitterness and judgment. It’s better to reward undeserving people a little too much than to refuse to help genuinely needy people out of fear that they might not have done enough to earn it. It’s better to be a little too quick to pardon those who might be guilty than to be a little too quick to punish those who might be innocent. And it’s better to be a little too charitable toward arguments you disagree with than to not be charitable enough. If there’s one thing that you take away from this post, then, I hope that’s it. More kindness and humanity, less resentment and hostility. It may be a cliché, but it’s one that always bears repeating for anyone who seeks truth and knowledge – because after all, seeking truth and knowledge, and treating others with kindness and humanity, are really two sides of the same coin. They’re both rooted in a mindset that, above all else, strives for understanding, in every sense of the word. As far as I’m concerned, then, that’s as good a foundation to build on as any. ∎