Ideas and Ideologies


Not every disagreement is a life-or-death fight for survival – but framing every conflict as one makes it more likely to become one. When both sides have convinced themselves that they’re heroes fighting a high-stakes battle against the forces of evil, a collision between them isn’t just a matter of dispassionately comparing facts and figuring out which side is correct – it’s more like a holy crusade. As Will Storr writes:

Facts do not exist in isolation. They are like single pixels in a person’s generated reality. Each fact is connected to other facts and those facts to networks of other facts still. When they are all knitted together, they take the form of an emotional and dramatic plot at the centre of which lives the individual. When a climate scientist argues with a denier, it is not a matter of data versus data, it is hero narrative versus hero narrative, David versus David, tjukurpa versus tjukurpa. It is a clash of worlds.

[This phenomenon] exposes this strange urge that so many humans have, to force their views aggressively on others. We must make them see things as we do. They must agree, we will make them agree. There is no word for it, as far as I know. ‘Evangelism’ doesn’t do it: it fails to acknowledge its essential violence. We are neural imperialists, seeking to colonise the worlds of others, installing our own private culture of beliefs into their minds. I wonder if this response is triggered when we pick up the infuriating sense that an opponent believes that they are the hero, and not us. The provocation! The personal outrage! The underlying dread, the disturbance in reality. The restless urge to prove that their world, and not ours, is the illusion.

[We have an] inherent desire to slay Goliath, to colonise the mental worlds of others, to win.

This impulse is powerful enough when it’s just two people contending against each other; but when you’ve got two entire groups going head to head, the effect is multiplied exponentially as group loyalty incentives take over. Baumeister discusses how good intentions, when taken to an extreme, can often lead to evil consequences within this group dynamic:

One far-reaching difference between idealistic evil [i.e. evil rooted in ideological motives] and other forms of evil is that idealistic evil is nearly always fostered by groups, as opposed to individuals. When someone kills for the sake of promoting a higher good, he may find support and encouragement if he is acting as part of a group of people who share that belief. If he acts as a lone individual, the same act is likely to brand him as a dangerous nut.

One reason for the importance of groups in idealistic evil is the power of the group to support its high, broad ideals. Abstract conceptions of how things ought to be gain social reality from the mere fact of being shared by a group. Without that group context, they are merely the whims of individuals, and as such they do not justify the use of violent means. To put this more bluntly: It is apparently necessary to have someone else tell you that violent means are justified by high ends. If no one of importance agrees with you, you will probably stop short of resorting to them. But if you belong to a group that shares your passionate convictions and concurs in the belief that force is necessary, you will be much more likely to resort to force. People seem to need others to validate their beliefs and opinions before they put them into practice, especially in a violent and confrontational way.

This is one of the less recognized aspects of the much-discussed experiments done by Stanley Milgram. In those studies, an experimenter instructed an ordinary person (a volunteer) to deliver strong electric shocks to another person, who was actually a confederate posing as an unsuspecting fellow subject. These ordinary people complied with instructions and delivered many severe shocks to the victim, far beyond the predictions and expectations of any of the researchers involved in the project.

As Milgram noted, many of the participants were upset about what they were doing. They showed signs of stress and inner conflict while they were pressing buttons that (supposedly) gave painful and even potentially harmful or lethal shocks to another person. […] Such distress is the normal reaction to hurting others.

Despite their inner distress, however, the vast majority of participants delivered increasingly severe shocks, up to the maximum level possible. A crucial factor was the presence of a fellow human being assuring them that their actions were justified and, indeed, were their duty. They had nothing to gain by inflicting harm, nor did they get any prestige or other advantage from hurting the victim, but their actions did presumably serve the commendable goal of advancing scientific progress. The presence of the experimenter to represent the community of scientific researchers was a central aspect of this experiment. By pressing the button, the subject participated in the group’s worthy enterprise.

The importance of the interpersonal dimension was indicated by the effect of physical distance. In later replications of the study, Milgram varied how close the subject sat to the experimenter as opposed to the victim. Being closer to the victim made the subject less willing to deliver hurtful shocks. Being closer to the experimenter (the authority figure) made subjects more willing.

In most cases, of course, such extreme acts are committed by devoted members of the group, rather than by temporary recruits. Thus, they share the group’s beliefs and ideals and are presumably willing to do what will further the positive goals of the group. The group is an important source of moral authority. Individual acts may be questioned, which usually means questioning them in terms of how well they fit into the recognized goals and procedures of the group. But the group itself is above question.

This pattern of deferring to the group’s moral authority is seen over and over again in violent groups. Consider again the Khmer Rouge. Like many Communist parties, it was a firm believer in the practice of self-criticism by individual members. But this meant examining one’s own acts (and thoughts or feelings) to see whether they corresponded to the proper party line. Criticism of the party itself was strictly off-limits.

Criticism sessions in Western Communist groups showed the same pattern. Individuals sat around and scrutinized themselves to see how they fit or failed to fit the official party line, but they never questioned the party line. When the party adopted a new position, individual members scrambled to agree with it and to convince themselves that they had believed this all along. Arthur Koestler cynically described the process from his days as a Communist: “We groped painfully in our minds not only to find justifications for the line laid down, but also to find traces of former thoughts which would prove to ourselves that we had always held the required opinion. In this operation we mostly succeeded.” Whether one looks at religious warriors, members of Fascist or Communist groups, or modern members of street gangs, one finds the same pattern: The group is regarded as above reproach. The members of the group may sometimes think rather poorly of one another, but the group as a whole is seen as supremely good.

Why do groups seem to have this effect? Although several factors contribute, it is necessary to begin with the fundamental appeal of groups. Probably this appeal is deeply rooted in human nature. The human tendency to seek a few close social bonds to other people is universal, and nearly everyone belongs to some sort of group, whether a family or a mass movement. People who lack close social ties are generally unhappy, unhealthy, and more vulnerable than other people to stress and other problems. Some theorists have argued that the tendency to form small groups is the most important adaptation in human evolution, ranking even above intelligence, and so natural selection has shaped human nature to need to belong to groups.

The need to belong may be universal, but it is not always equally strong. One factor that seems especially to intensify the need is competition with other groups. Thus, one could debate the evolutionary benefits of belonging to a group, noting that the advantages of sharing others’ resources could be offset by the pressure to share one’s own resources with them. There is no doubt, however, about the competitive disadvantage of not belonging to a group when there are other groups. If there is some scarce resource such as food that a group wants and a lone individual also wants, the group is almost sure to get it. Thus, the need to bond with other people may be stimulated by the presence of a rival or enemy group.

This tendency toward intergroup competition fits well with what we have already seen. The words Devil and Satan are derived from words meaning “adversary” and “opponent,” which fits the view that rivalry or antagonism is central to the basic, original understanding of evil. Evil is located in the group that opposes one’s own group. The survival of one’s own group is seen as the ultimate good, and it may require violent acts against the enemy group.

[…]

The tendency toward intergroup competition sheds light on one aspect of what some researchers have called the discontinuity effect, that is, the pattern by which a group tends to be more extreme than the sum of its individual members. In particular, higher levels of aggression and violence are associated with group encounters than with individual encounters. People generally expect that a meeting between two individuals will be amiable, and that even if they have different goals or backgrounds they may find some way to compromise and agree. In contrast, people expect that a meeting between two groups will be less amiable and less likely to proceed smoothly to compromise. Laboratory studies support these expectations and indicate that groups tend to be more antagonistic, competitive, and mutually exploitive than individuals. In fact, the crucial factor seems to be the perception that the other side is a group. An individual will adopt a more antagonistic stance when dealing with a group than when dealing with another individual.

Probably the easiest way to understand this difference is to try a simple thought experiment. Imagine a white man and a black man encountering each other across a table in a meeting room, one on one, to discuss some area of disagreement. Despite the racial antagonism that is widely recognized in the United States today, the meeting is likely to proceed in a reasonably friendly fashion, with both men looking for some way to resolve the dispute. Now imagine a group of four white men meeting a group of four black men in the same room. Intuition confirms the research findings: The group dispute will be harder to resolve.

There is nothing sinister or wrong with wanting to belong to a group, of course. Groups may perpetrate evil, but they can also accomplish considerable good (and without doing any harm in the process). Groups can accomplish positive, virtuous things that go beyond what individuals can do. Groups do provide a moral authority, however, that can give individuals sufficient justification to perform wicked actions. Moreover, when groups confront each other, it is common for the confrontation to degenerate into an antagonistic and potentially hostile encounter. In these ways, the existence of a group can promote evil and violence.

When you’re part of a group, there’s a powerful incentive to signal your commitment to the cause by promoting the group’s ideology more aggressively than everyone else around you. Unfortunately, as Wong mentioned before, everyone else around you shares the same incentives – so the more aggressively each member of the group pushes, the more aggressively the other members of the group must push as well if they want to keep up. The result is that movements which start off promoting relatively reasonable goals can quickly spiral out of control and end up promoting dangerously radical ones. Adam Gopnik writes:

Reformers are famously prey to the fanaticism of reform. A sense of indignation and a good cause lead first to moral urgency, and then soon afterward to repetition, whereby the reformers become captive to their own rhetoric, usually at a cost to their cause. Crusaders against widespread alcoholism (as acute a problem in 1910 as the opioid epidemic is today) advanced to the folly of Prohibition, which created a set of organized-crime institutions whose effects have scarcely yet passed. Progressive Era trade unionists, fending off corporate thugs, could steer into thuggish forms of Stalinism. Those with the moral courage to protest the Vietnam War sometimes became blinded to the reality of the North Vietnamese government – and on and on. It seems fair to say that a readiness to amend and reconsider the case being made is exactly what separates a genuine reforming instinct from a merely self-righteous one.

Social scientists actually have a term for this kind of phenomenon, as Brennan notes:

[Group] deliberation tends to move people toward more extreme versions of their ideologies rather than toward more moderate versions. Legal theorist Cass Sunstein calls this the “Law of Group Polarization.”

And the perverse consequence of this law, Brennan adds, is that being part of a group “often causes [individual members] to choose positions inconsistent with their own views – positions that [they] ‘later regret.’”

This is a particularly important point. Here’s Baumeister again:

A final way in which groups contribute to the escalation of violence emerges from the discrepancy between what the members of the group say and what they privately believe. The group seems to operate based on what the members of the group say to one another. It may often happen that the members harbor private doubts about what the group is doing, but they refuse to voice them, and the group proceeds as if the doubts did not exist.

Social psychologists have known for several decades that a group is more than the sum or average of its members. Influential early research showed that groups sometimes make decisions that are riskier than what the average private opinion of the group members favors. More relevant recent work has shown that groups tend to communicate and make decisions based on what the members have in common, which may differ substantially from what the individual members think. In a remarkable series of studies, psychologists Garold Stasser and William Titus showed that groups sometimes make poor decisions even when they have sufficient information to do better. In one study, each group was supposed to decide which of two candidates to hire. Each member of the group came to the meeting armed with preliminary information about the candidates. The researchers provided more information favoring one candidate, Anderson, but they scattered it through the group. In contrast, the smaller amount of information favoring the other candidate, Baker, was concentrated so everyone knew it. Had the group really pooled their information, they would have discovered that the totality pointed clearly toward hiring Anderson. But instead of doing this, group after group merely talked about what they all knew in common, which was the information favorable to Baker. As a result, group after group chose Baker.

We have already seen how groups involved in evil will suppress doubts and dissent. [Our earlier discussion] quoted some of the people who worked in the Stalinist terror. These individuals said they privately doubted the propriety of what they were doing, but whenever anyone would begin to speak about such doubts, the others would silence him by insisting on the party line.

The Terror following the French Revolution showed how cruelty can escalate as a result of the pattern in which private doubts are kept secret and public statements express zeal and fervor. The Terror was directed mainly at the apparent enemies of the Revolution, and the Revolutionary government was constantly obsessed with internal enemies who presumably sought to betray it. Hence, those at the center of government were paradoxically its most likely victims. To criticize the Revolution or even to question its repressive measures was to invite suspicion of oneself. Accordingly, the members of the tribunal and others began to try to outdo each other in making strong statements about the need for harsh measures, because only such statements could keep them safe from the potential accusation of lacking the proper attitudes. The discussions and decisions featured mainly the most violent and extreme views, and the degree of brutality escalated steadily. Ironically, the leaders’ fear of one another caused them to become ever more violent, even draconian, with the result that they all really did have more and more to fear. And one by one, most of them were killed by the Revolution over which they were presiding.

Many people will sympathize with victims or question whether their own side’s most violent actions are morally right, but they will also feel ashamed of these doubts. What is said in the group, and what is likely to dictate the group’s actions, will be the most extreme and virulent sentiments. Whatever their private feelings, the members may express only the politically correct views of strong hatred of the enemy. In such an environment, the group’s actions may reflect a hatred that is more intense than any of its members actually feel. The group will be more violent than the people in it. Given all the other processes that foster escalation, it may not even be necessary for groups to have this effect forever. Once the members of the group are waist-deep in blood, it is too late for them to question the group’s project as a whole, and so they are all the more likely to wade in even deeper.

[…]

Self-deception is an iffy business. People cannot convince themselves of just anything they might want to believe. In fact, the margin for self-deception is often rather small: People can only stretch the facts and the evidence to a limited degree. Groups, however, have several advantages in this regard, because they can support each other’s beliefs. When one is surrounded by people who all believe the same thing, any contrary belief gradually seems less and less plausible. It is not hard for us as outsiders to knock holes in the self-justifying reasoning of perpetrators, but such an exercise is misleading. Perpetrators often find themselves in groups where no one would think to raise objections and everyone would agree with even flimsy arguments that support their side.

Great evil can be perpetrated by small groups when the members strive to think alike and support the prevailing views. Unfortunately, power tends to produce just such situations, because the most powerful men (and presumably women) tend to surround themselves with like-minded associates, who become reluctant to challenge the prevailing views. Whether the group is a small religious cult, a set of corporate executives, or a ruling clique in a large country, the justifications expressed by everyone in the group will tend to gain force.

The communication patterns increase this force. Committees and other small groups tend to focus on what everyone believes and knows in common. Private opinions and extraneous facts are kept to oneself. […] Many individuals might have doubts and qualms but are reluctant to express them, so that everything said publicly in the group conforms to the party line. When the moral acceptability of some violent action is at issue, everyone keeps silent about his reservations and objections, and everyone repeats the overt justifications and rationalizations. As a result, everyone gets the impression that everyone else believes those justifications and that his own doubts are an anomaly. One may even feel guilty about having doubts; everyone else seems so certain.

One of the most remarkable phenomena of the twentieth century is the speed with which countries have abandoned their totalitarian beliefs, despite having advocated them with apparently minimal dissent for long periods of time or at great cost. All the Germans seemed to be behind Hitler, but immediately after the war, the Allies could find hardly anyone who professed to have sincerely believed in the Nazi world view. Likewise, the nations of Eastern Europe apparently supported their Communist governments with little criticism or dissent, and then in 1989 they abruptly abandoned Communism wholesale and embraced an entirely different approach to politics and economics.

Such rapid and radical conversions begin to make sense if one accepts the view of self-deception we have developed here. People want to believe what the government tells them. They want to believe that what their society is doing is the right thing. To help themselves believe, they suspend criticism and questioning, and they go along with others in expressing their preferred views. But when circumstances discredit the ruling view, they suddenly acknowledge all the problems and fallacies they had avoided, and they can say with reasonable honesty that they did not sincerely believe it after all. Their desire to believe makes a great deal of difference when the facts are ambiguous.

Pinker delves more deeply into the social and psychological mechanisms driving this type of behavior:

Why do people so often impersonate sheep? It’s not that conformity is inherently irrational. Many heads are better than one, and it’s usually wiser to trust the hard-won wisdom of millions of people in one’s culture than to think that one is a genius who can figure everything out from scratch. Also, conformity can be a virtue in what game theorists call coordination games, where individuals have no rational reason to choose a particular option other than the fact that everyone else has chosen it. Driving on the right or the left side of the road is a classic example: here is a case in which you really don’t want to march to the beat of a different drummer. Paper currency, Internet protocols, and the language of one’s community are other examples.

But sometimes the advantage of conformity to each individual can lead to pathologies in the group as a whole. A famous example is the way an early technological standard can gain a toehold among a critical mass of users, who use it because so many other people are using it, and thereby lock out superior competitors. According to some theories, these “network externalities” explain the success of English spelling, the QWERTY keyboard, VHS videocassettes, and Microsoft software (though there are doubters in each case). Another example is the unpredictable fortunes of bestsellers, fashions, top-forty singles, and Hollywood blockbusters. The mathematician Duncan Watts set up two versions of a Web site in which users could download garage-band rock music. In one version users could not see how many times a song had already been downloaded. The differences in popularity among songs were slight, and they tended to be stable from one run of the study to another. But in the other version people could see how popular a song had been. These users tended to download the popular songs, making them more popular still, in a runaway positive feedback loop. The amplification of small initial differences led to large chasms between a few smash hits and many duds – and the hits and duds often changed places when the study was rerun.
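
[To make Watts’s runaway feedback loop concrete, here’s a minimal toy simulation – my own illustrative sketch, not the study’s actual code. The hidden quality scores and the popularity-bonus weighting are invented for the example.]

```python
import random

def run_market(n_songs=50, n_users=2000, social_influence=True, seed=None):
    """Toy version of the music-download experiment.

    Each simulated user downloads one song. Without social influence,
    choices track each song's fixed (hidden) quality. With it, visible
    download counts add to a song's appeal, so small early leads feed
    on themselves.
    """
    rng = random.Random(seed)
    quality = [rng.random() for _ in range(n_songs)]  # hidden intrinsic appeal
    downloads = [0] * n_songs
    for _ in range(n_users):
        if social_influence:
            total = sum(downloads) + 1
            # Assumed weighting: quality plus a bonus for visible popularity.
            weights = [q + 3.0 * d / total for q, d in zip(quality, downloads)]
        else:
            weights = quality
        pick = rng.choices(range(n_songs), weights=weights)[0]
        downloads[pick] += 1
    return downloads

# Rerun the "influence" condition with different seeds: a few songs'
# counts dwarf the rest, and which songs end up on top changes from
# run to run - the amplification of small initial differences.
for seed in (1, 2, 3):
    print(sorted(run_market(seed=seed), reverse=True)[:5])
```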

Whether you call it herd behavior, the cultural echo chamber, the rich get richer, or the Matthew Effect, our tendency to go with the crowd can lead to an outcome that is collectively undesirable. But the cultural products in these examples – buggy software, mediocre novels, 1970s fashion – are fairly innocuous. Can the propagation of conformity through social networks actually lead people to sign on to ideologies they don’t find compelling and carry out acts they think are downright wrong? Ever since the rise of Hitler, a debate has raged between two positions that seem equally unacceptable: that Hitler single-handedly duped an innocent nation, and that the Germans would have carried out the Holocaust without him. Careful analyses of social dynamics show that neither explanation is exactly right, but that it’s easier for a fanatical ideology to take over a population than common sense would allow.

There is a maddening phenomenon of social dynamics variously called pluralistic ignorance, the spiral of silence, and the Abilene paradox, after an anecdote in which a Texan family takes an unpleasant trip to Abilene one hot afternoon because each member thinks the others want to go. People may endorse a practice or opinion they deplore because they mistakenly think that everyone else favors it. A classic example is the value that college students place on drinking till they puke. In many surveys it turns out that every student, questioned privately, thinks that binge drinking is a terrible idea, but each is convinced that his peers think it’s cool. Other surveys have suggested that gay-bashing by young toughs, racial segregation in the American South, honor killings of unchaste women in Islamic societies, and tolerance of the terrorist group ETA among Basque citizens of France and Spain may owe their longevity to spirals of silence. The supporters of each of these forms of group violence did not think it was a good idea so much as they thought that everyone else thought it was a good idea.

Can pluralistic ignorance explain how extreme ideologies may take root among people who ought to know better? Social psychologists have long known that it can happen with simple judgments of fact. In another hall-of-fame experiment, Solomon Asch placed his participants in a dilemma right out of the movie Gaslight. Seated around a table with seven other participants (as usual, stooges), they were asked to indicate which of three very different lines had the same length as a target line, an easy call. The six stooges who answered before the participant each gave a patently wrong answer. When their turn came, three-quarters of the real participants defied their own eyeballs and went with the crowd.

But it takes more than the public endorsement of a private falsehood to set off the madness of crowds. Pluralistic ignorance is a house of cards. As the story of the Emperor’s New Clothes makes clear, all it takes is one little boy to break the spiral of silence, and a false consensus will implode. Once the emperor’s nakedness became common knowledge, pluralistic ignorance was no longer possible. The sociologist Michael Macy suggests that for pluralistic ignorance to be robust against little boys and other truth-tellers, it needs an additional ingredient: enforcement. People not only avow a preposterous belief that they think everyone else avows, but they punish those who fail to avow it, largely out of the belief – also false – that everyone else wants it enforced. Macy and his colleagues speculate that false conformity and false enforcement can reinforce each other, creating a vicious circle that can entrap a population into an ideology that few of them accept individually.

Why would someone punish a heretic who disavows a belief that the person himself or herself rejects? Macy et al. speculate that it’s to prove their sincerity – to show other enforcers that they are not endorsing a party line out of expedience but believe it in their hearts. That shields them from punishments by their fellows – who may, paradoxically, only be punishing heretics out of fear that they will be punished if they don’t.

The suggestion that unsupportable ideologies can levitate in midair by vicious circles of punishment of those who fail to punish has some history behind it. During witch hunts and purges, people get caught up in cycles of preemptive denunciation. Everyone tries to out a hidden heretic before the heretic outs him. Signs of heartfelt conviction become a precious commodity. Solzhenitsyn recounted a party conference in Moscow that ended with a tribute to Stalin. Everyone stood and clapped wildly for three minutes, then four, then five . . . and then no one dared to be the first to stop. After eleven minutes of increasingly stinging palms, a factory director on the platform finally sat down, followed by the rest of the grateful assembly. He was arrested that evening and sent to the gulag for ten years. People in totalitarian regimes have to cultivate thoroughgoing thought control lest their true feelings betray them. Jung Chang, a former Red Guard and then a historian and memoirist of life under Mao, wrote that on seeing a poster that praised Mao’s mother for giving money to the poor, she found herself quashing the heretical thought that the great leader’s parents had been rich peasants, the kind of people now denounced as class enemies. Years later, when she heard a public announcement that Mao had died, she had to muster every ounce of thespian ability to pretend to cry.

To show that a spiral of insincere enforcement can ensconce an unpopular belief, Macy, together with his collaborators Damon Centola and Robb Willer, first had to show that the theory was not just plausible but mathematically sound. It’s easy to prove that pluralistic ignorance, once it is in place, is a stable equilibrium, because no one has an incentive to be the only deviant in a population of enforcers. The trick is to show how a society can get there from here. Hans Christian Andersen had his readers suspend disbelief in his whimsical premise that an emperor could be hoodwinked into parading around naked; Asch paid his stooges to lie. But how could a false consensus entrench itself in a more realistic world?

The three sociologists simulated a little society in a computer consisting of two kinds of agents. There were true believers, who always comply with a norm and denounce noncompliant neighbors if they grow too numerous. And there were private but pusillanimous skeptics, who comply with a norm if a few of their neighbors are enforcing it, and enforce the norm themselves if a lot of their neighbors are enforcing it. If these skeptics aren’t bullied into conforming, they can go the other way and enforce skepticism among their conforming neighbors. Macy and his collaborators found that unpopular norms can become entrenched in some, but not all, patterns of social connectedness. If the true believers are scattered throughout the population and everyone can interact with everyone else, the population is immune to being taken over by an unpopular belief. But if the true believers are clustered within a neighborhood, they can enforce the norm among their more skeptical neighbors, who, overestimating the degree of compliance around them and eager to prove that they do not deserve to be sanctioned, enforce the norm against each other and against their neighbors. This can set off cascades of false compliance and false enforcement that saturate the entire society.
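
[A rough sketch of how such a simulation might look – this is my simplification, not the published model; the ring layout, neighborhood size, and thresholds are all assumptions chosen to make the clustering effect visible.]

```python
def simulate(n=100, believers=6, clustered=True, comply_at=1, enforce_at=2,
             steps=60):
    """Simplified agent-based sketch of false compliance and enforcement.

    Agents sit on a ring and see two neighbors on each side. True
    believers always comply with the norm and enforce it. Skeptics
    privately reject the norm, but comply if at least `comply_at`
    neighbors are enforcing, and - to look sincere - enforce it
    themselves if at least `enforce_at` neighbors are enforcing.
    """
    if clustered:
        believer_ids = set(range(believers))  # one tight neighborhood
    else:
        believer_ids = {(i * n) // believers for i in range(believers)}  # spread thin
    enforcing = [i in believer_ids for i in range(n)]
    complying = [i in believer_ids for i in range(n)]
    for _ in range(steps):
        # Synchronous update: count enforcing neighbors, then revise everyone.
        pressure = [sum(enforcing[(i + d) % n] for d in (-2, -1, 1, 2))
                    for i in range(n)]
        for i in range(n):
            if i not in believer_ids:
                complying[i] = pressure[i] >= comply_at
                enforcing[i] = pressure[i] >= enforce_at
    return sum(complying) / n

# Clustered believers set off a cascade that saturates the whole ring;
# the same number of believers spread thinly converts almost no one.
print("clustered:", simulate(clustered=True))    # -> 1.0
print("scattered:", simulate(clustered=False))   # -> ~0.3
```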

The analogy to real societies is not far-fetched. James Payne documented a common sequence in the takeover of Germany, Italy, and Japan by fascist ideologies in the 20th century. In each case a small group of fanatics embraced a “naïve, vigorous ideology that justifies extreme measures, including violence,” recruited gangs of thugs willing to carry out the violence, and intimidated growing segments of the rest of the populations into acquiescence.

Macy and his collaborators played with another phenomenon that was first discovered by Milgram: the fact that every member of a large population is connected to everyone else by a short chain of mutual acquaintances – six degrees of separation, according to the popular meme. They laced their virtual society with a few random long-distance connections, which allowed agents to be in touch with other agents with fewer degrees of separation. Agents could thereby sample the compliance of agents in other neighborhoods, disabuse themselves of a false consensus, and resist the pressure to comply or enforce. The opening up of neighborhoods by long-distance channels dissipated the enforcement of the fanatics and prevented them from intimidating enough conformists into setting off a wave that could swamp the society. One is tempted toward the moral that open societies with freedom of speech and movement and well-developed channels of communication are less likely to fall under the sway of delusional ideologies.
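
[Extending the sketch above: give each agent a couple of random long-range contacts and require that a majority of everything they see be enforcing before they cave. The wiring and the majority threshold are again my own illustrative choices, not the authors’ parameterization.]

```python
import random

def simulate_with_shortcuts(n=100, believers=6, extra_ties=2, steps=60, seed=1):
    """Same ring as before, but each agent also sees a few random
    far-away agents and caves only if a majority of all contacts are
    enforcing. Sampling the wider (mostly non-enforcing) world dilutes
    local pressure, so even clustered believers can't start a cascade.
    """
    rng = random.Random(seed)
    believer_ids = set(range(believers))  # clustered, as before

    def ring_dist(i, j):
        return min(abs(i - j), n - abs(i - j))

    # Four ring neighbors plus a few random long-distance ties per agent.
    contacts = {i: [(i + d) % n for d in (-2, -1, 1, 2)]
                   + rng.sample([j for j in range(n) if ring_dist(i, j) > 2],
                                extra_ties)
                for i in range(n)}
    enforcing = [i in believer_ids for i in range(n)]
    complying = [i in believer_ids for i in range(n)]
    for _ in range(steps):
        frac = [sum(enforcing[j] for j in contacts[i]) / len(contacts[i])
                for i in range(n)]
        for i in range(n):
            if i not in believer_ids:
                complying[i] = frac[i] >= 0.5
                enforcing[i] = frac[i] > 0.5
    return sum(complying) / n

# With long-range ties, compliance stays near the believers' own share
# instead of sweeping the ring.
print("clustered + long ties:", simulate_with_shortcuts())  # -> ~0.06
```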

Macy, Willer, and Ko Kuwabara then wanted to show the false-consensus effect in real people – that is, to see if people could be cowed into criticizing other people whom they actually agreed with if they feared that everyone else would look down on them for expressing their true beliefs. The sociologists mischievously chose two domains where they suspected that opinions are shaped more by a terror of appearing unsophisticated than by standards of objective merit: wine-tasting and academic scholarship.

In the wine-tasting study, Macy et al. first whipped their participants into a self-conscious lather by telling them they were part of a group that had been selected for its sophistication in appreciating fine art. The group would now take part in the “centuries-old tradition” (in fact, concocted by the experimenters) called a Dutch Round. A circle of wine enthusiasts first evaluate a set of wines, and then evaluate one another’s wine-judging abilities. Each participant was given three cups of wine and asked to grade them on bouquet, flavor, aftertaste, robustness, and overall quality. In fact, the three cups had been poured from the same bottle, and one was spiked with vinegar. As in the Asch experiment, the participants, before being asked for their own judgments, witnessed the judgments of four stooges, who rated the vinegary sample higher than one of the unadulterated samples, and rated the other one best of all. Not surprisingly, about half the participants defied their own taste buds and went with the consensus.

Then a sixth participant, also a stooge, rated the wines accurately. Now it was time for the participants to evaluate one another, which some did confidentially and others did publicly. The participants who gave their ratings confidentially respected the accuracy of the honest stooge and gave him high marks, even if they themselves had been browbeaten into conforming. But those who had to offer their ratings publicly compounded their hypocrisy by downgrading the honest rater.

The experiment on academic writing was similar, but with an additional measure at the end. The participants, all undergraduates, were told they had been selected as part of an elite group of promising scholars. They had been assembled, they learned, to take part in the venerable tradition called the Bloomsbury Literary Roundtable, in which readers publicly evaluate a text and then evaluate each other’s evaluation skills. They were given a short passage to read by Robert Nelson, Ph.D., a MacArthur “genius grant” recipient and Albert W. Newcombe Professor of Philosophy at Harvard University. (There is no such professor or professorship.) The passage, called “Differential Topology and Homology,” had been excerpted from Alan Sokal’s “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity.” The essay was in fact the centerpiece of the famous Sokal Hoax, in which the physicist had written a mass of gobbledygook and, confirming his worst suspicions about scholarly standards in the postmodernist humanities, got it published in the prestigious journal Social Text.

The participants, to their credit, were not impressed by the essay when they rated it in private. But when they rated it in public after seeing four stooges give it glowing evaluations, they gave it high evaluations too. And when they then rated their fellow raters, including an honest sixth one who gave the essay the low rating it deserved, they gave him high marks in private but low marks in public. Once again the sociologists had demonstrated that people not only endorse an opinion they do not hold if they mistakenly believe everyone else holds it, but they falsely condemn someone else who fails to endorse the opinion. The extra step in this experiment was that Macy et al. got a new group of participants to rate whether the first batch of participants had sincerely believed that the nonsensical essay was good. The new raters judged that the ones who condemned the honest rater were more sincere in their misguided belief than the ones who chose not to condemn him. It confirms Macy’s suspicion that enforcement of a belief is perceived as a sign of sincerity, which in turn supports the idea that people enforce beliefs they don’t personally hold to make themselves look sincere. And that, in turn, supports their model of pluralistic ignorance, in which a society can be taken over by a belief system that the majority of its members do not hold individually.

If you’ve ever wondered what in the world could possibly have driven entire populations of ordinary people to abandon their humanity and become Nazis or Stalinists, or to commit acts of the most heinous violence on a mass scale, here’s your answer. Given the right incentives, even the most unassuming individuals can become part of a group that is crueler and more vindictive than any one of its particular members. And as bad as this effect is when it’s driven by forces like peer pressure and social incentives, it’s even worse when the participants genuinely believe that their enemies are so evil that violent action must be taken in order to stop them. Here’s another excerpt from Baumeister:

There are important implications of idealistic evil for the victims. Idealistic perpetrators believe they have a license, even a duty, to hate. They perceive the victim in terms of the myth of pure evil: as fundamentally opposed to the good, for no valid reason or even for the sheer joy of evil.

One implication is that ordinary restraints that apply even to severe conflicts may be waived. Holy wars tend to be more brutal and merciless than ordinary wars, and the reason for this is now apparent. When fighting against the forces of evil, there is no reason to expect a fair fight – and hence no reason to fight fair oneself. Idealists think they are up against a dangerous and powerful opponent who will stop at nothing to spread evil in the world, and so desperate and extreme measures are appropriate.

In a sense, this solves the problem of how ends justify means. If you are up against Satan, you should not expect the ordinary rules to apply. Murder may be acceptable if you are killing the most wicked and demonic enemies of the good; indeed, the state does that by executing the worst criminals and traitors. And the Bible is full of examples of how killing was all right when done in God’s name and in the service of divinely sanctioned causes. After all, it is only because of broad conceptions of goodness that murder is seen as wrong. If Satan is your enemy, you know that the fight is not going to be conducted in line with those notions of goodness. Satan cannot be expected to obey Christian morals and similar rules.

Another implication is that the victim’s options are slim. In instrumental evil, the victim can get off relatively easily by conceding whatever it is that the perpetrator wants. In idealistic evil, however, what the perpetrator often wants is that the victim be dead. The victim’s suffering is not one of many means to an end, but an essential condition for the (ostensible) triumph of good, and that leaves the victim with much less latitude to make a deal.

[…]

A key to understanding this link between idealism and violence is that high moral principles reduce the room for compromise. If two countries are fighting over disputed territory and neither can achieve a clear victory on the field, they may well make some kind of deal to divide the land in question between them. But it is much harder to make a deal with the forces of evil or to find some compromise in matters of absolute, eternal truth. You can’t sell half your soul to the devil.

This refusal to compromise is evident in the same examples we have discussed. The Thirty Years’ War was one of the most miserably ruinous wars ever fought, especially if one adjusts its devastation to account for the relatively primitive weapons in use. On several occasions, the war-weary sides were both ready to negotiate an end to it, but ideological commitments to one or the other version of Christianity scuttled the deal and sent everyone back to the battlefield.

Pinker reiterates this point:

Names like the “Thirty Years’ War” and the “Eighty Years’ War” […] tell us that the Wars of Religion were not just intense but interminable. The historian of diplomacy Garrett Mattingly notes that in this period a major mechanism for ending war was disabled: “As religious issues came to dominate political ones, any negotiations with the enemies of one state looked more and more like heresy and treason. The questions which divided Catholics from Protestants had ceased to be negotiable. Consequently … diplomatic contacts diminished.”

[…]

Ideologies, whether religious or political, push wars out along the tail of the deadliness distribution because they inflame leaders into trying to outlast their adversaries in destructive wars of attrition, regardless of the human costs.

And this is probably the biggest reason of all why ideological violence should be avoided as strenuously as possible. When you’ve got two sides who perceive themselves as fighting not just for practical goals, but for sacred values that can never be compromised, then the moment one side starts throwing punches, the severity of the conflict has nowhere to go but up – and the possibility of peaceful resolution disappears. As ContraPoints puts it:

Politics is based on a norm of reciprocity, which means I treat my opponents the way I expect them to treat me. If I start punching people out to silence their political speech, how can I reasonably expect them not to do the same to me?

And as Eneasz Brodski adds:

The reason I am against [even a white supremacist] getting punched, and everyone being so happy about it, is entirely just the rule of law. The thing that protects us from random people coming up and punching us because they disagree with our views is the fact that extrajudicial violence is not tolerated in our society. The reason that their Nazi groups don’t get to riot in the street and punch people is because we don’t accept that sort of thing. And once society does start accepting extrajudicial violence, that is when we get things like the deep South lynchings and those sorts of atrocities, because there is no protection anymore for the people who aren’t on the right side of the mob.

He continues:

There will always be crazy fuckers with awful ideas. You discredit them, and you rely on the laws to protect us from their violence. The law is what holds them back. It’s when the law fails to do so that things are dangerous (see: the South, up until just a few decades ago). That’s why I become worried when people gleefully cheer at the failure of the law to protect people from violence. If you think beating someone in the street will effectively discredit them and keep public opinion on your side, well, I think that’s a bad way to influence public opinion.

Again, there are some people who argue that they have no choice but to resort to violence: because their opponents hold beliefs that are inherently pro-violence, they claim, merely holding those beliefs is equivalent to committing literal violence against them, and so they are justified in defending themselves by force. In rare cases, something like that actually can be true. If a Klansman or a jihadist, for instance, is standing in the middle of the town square shouting that it’s time to start exterminating the Jews – so everybody grab your guns and let’s start hunting down every Jew in town – then yes, obviously that’s not a dispute you’re going to resolve through calm, respectful dialogue (unless you happen to be a master negotiator). But in such cases, where someone’s speech really is extreme enough to incite others to commit violence, that speech constitutes a crime, and your response, for God’s sake, should be to call the police – not to blast the person on Twitter or try to smack them around a little bit yourself. If you really believe that what they’re saying poses a legitimate threat to the safety of others, then your duty is to alert the authorities so that they can stop whatever terrible act of violence is about to be committed.

Otherwise – if, for instance, your opponents are just making resentful noises about how they wish somebody would put you out of their misery – then that’s deeply troubling, to be sure, but it’s not the equivalent of committing literal acts of violence against your side, and it doesn’t justify committing violent acts of your own in “self-defense” (especially considering that most of the people on your side have probably made plenty of similar comments aimed in the opposite direction). Your enemies’ ideas may be bad – so bad that they could lead to dangerous consequences at some point in the future – but if that’s the case, the point remains that they haven’t crossed the line into literal violence yet. We’re still talking about ideas here; and the way to handle ideas is through reasoned dialogue, not through force.
