Ideas and Ideologies

Within the kind of absolutist mindset that dominates today’s discourse, people often become so obsessed with trying to root out every minuscule offense against their side that they reflexively make snap judgments against anyone and anything that shows even a whiff of impurity. But as Storr points out:

[There’s a] kind of binary, dismissive thinking that I worry is evident among some Skeptics. In their haste to dismiss the ranting [zealots and quacks], subtler truths are being missed. […] Just because [one’s opponents] are wrong about one thing, it hasn’t necessarily followed that they are wrong about it all. And yet they are crucified for making one mistake.

T1J gives one example:

[It’s a problem] when you use the supposed failure of a movement as evidence that their entire cause should be discredited. During the African-American civil rights movement, the Nation of Islam was a corrupt black supremacist movement. But just because that group sucked didn’t change the fact that the problem they were fighting against existed.

And Alexander provides another example:

This is the same pattern we see in Israel and Palestine. How many times have you seen a news story like this one: “Israeli speaker hounded off college campus by pro-Palestinian partisans throwing fruit. Look at the intellectual bankruptcy of the pro-Palestinian cause!” It’s clearly intended as an argument for something other than just not throwing fruit at people. The causation seems to go something like “These particular partisans are violating the usual norms of civil discussion, therefore they are bad, therefore something associated with Palestine is bad, therefore your General Factor of Pro-Israeliness should become more strongly positive, therefore it’s okay for Israel to bomb Gaza.” Not usually said in those exact words, but the thread can be traced.

This kind of thinking goes back to the whole semantic net thing discussed at the very beginning of this post; rather than considering an isolated idea on its own merits, the more natural impulse is to judge it based on the entire bundle of other things that are also associated with that particular idea. It’s a kind of guilt-by-association approach that allows you to reject opposing ideas not just on an individual one-by-one basis, but on a wholesale basis, all at once. If your opponent is wrong about enough key points (or wrong in their tactics), then that’s enough to lump them into the mental category of “someone who’s wrong about things in general,” and not have to bother with anything else they have to say. (This also relates back to the inoculation effect mentioned before.)

Still though, as the old saying goes, even a broken clock is right twice a day. Or as Robert Pirsig puts it: “The world’s greatest fool may say the Sun is shining, but that doesn’t make it dark out.”

If you think about it purely in statistical terms, how likely can it really be that your opponents are all 100% wrong about everything 100% of the time, while you’re 100% right about everything 100% of the time? If you’ve had even the slightest bit of experience dealing with ideas before, the answer should be self-evident – after all, you’ve been mistaken so many times that obviously you can’t be 100% infallible; your opponents must at least have some things that they’re right about. But this concept is easier to accept in theory than in practice. Here’s Storr again:

I consider – as everyone surely does – that my opinions are the correct ones. And yet, I have never met anyone whose every single thought I agreed with. When you take these two positions together, they become a way of saying, ‘Nobody is as right about as many things as me.’ And that cannot be true. Because to accept that would be to confer upon myself a Godlike status. It would mean that I possess a superpower: a clarity of thought that is unique among humans. Okay, fine. So I accept that I am wrong about things – I must be wrong about them. A lot of them. But when I look back over my shoulder and I double-check what I think about religion and politics and science and all the rest of it… well, I know that I am right about that… and that… and that and that and – it is usually at this point that I start to feel strange. I know that I am not right about everything and yet I am simultaneously convinced that I am. I believe these two things completely, and yet they are in catastrophic logical opposition to each other.

If you allow yourself to get too caught up in the mistaken impression that you must be right about everything, it can lead you into logical traps. This is basically where the phenomenon of closed-mindedness comes from, as Tavris and Aronson point out:

[This mentality] creates a logical labyrinth because it presupposes two things: One, people who are open-minded and fair ought to agree with a reasonable opinion. And two, any opinion I hold must be reasonable; if it weren’t, I wouldn’t hold it. Therefore, if I can just get my opponents to sit down here and listen to me, so I can tell them how things really are, they will agree with me. And if they don’t, it must be because they are biased.

To quote yet another old saying, though:

It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.

The truth is, most people overestimate how correct they are about most things. As Alexander writes:

Nearly everyone is very very very overconfident. We know this from experiments where people answer true/false trivia questions, then are asked to state how confident they are in their answer. If people’s confidence was well-calibrated, someone who said they were 99% confident (ie only 1% chance they’re wrong) would get the question wrong only 1% of the time. In fact, people who say they are 99% confident get the question wrong about 20% of the time.

It gets worse. People who say there’s only a 1 in 100,000 chance they’re wrong? Wrong 15% of the time. One in a million? Wrong 5% of the time. They’re not just overconfident, they are fifty thousand times as confident as they should be.

This is not just a methodological issue. Test confidence in some other clever way, and you get the same picture. For example, one experiment asked people how many numbers there were in the Boston phone book. They were instructed to set a range, such that the true number would be in their range 98% of the time (ie they would only be wrong 2% of the time). In fact, they were wrong 40% of the time. Twenty times too confident! What do you want to bet that if they’d been asked for a range so wide there was only a one in a million chance they’d be wrong, at least five percent of them would have bungled it?
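
To make the arithmetic behind those figures concrete, here’s a minimal sketch in Python; the claimed-odds and observed-error numbers are the ones quoted above, and everything else is just bookkeeping:

```python
# Overconfidence ratios implied by the calibration figures quoted above:
# (label, stated probability of being wrong, observed fraction wrong).
calibration = [
    ("99% confident",       1 / 100,       0.20),
    ("1-in-100,000 odds",   1 / 100_000,   0.15),
    ("1-in-a-million odds", 1 / 1_000_000, 0.05),
]

for label, claimed_error, observed_error in calibration:
    ratio = observed_error / claimed_error
    print(f"{label}: wrong {ratio:,.0f}x more often than claimed")
```

The last line printed is the “fifty thousand times” figure: an observed 5% error rate is 50,000 times the one-in-a-million rate that was claimed.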

The problem, of course, is that although it may be easy to admit in principle that you can’t be right about everything, there’s no way of knowing which beliefs are actually the mistaken ones. From the inside, it feels like they’re all perfectly correct. So what can be done about this? One approach recommended by Alexander is to stop putting so much confidence in your “view from the inside” in the first place, and instead take a more meta-oriented approach, in which you not only consider the ideas themselves, but also your own certainty levels, from an outside perspective:

The Inside View is when you weigh the evidence around something, and go with whatever side’s evidence seems most compelling. The Outside View is when you notice that you feel like you’re right, but most people in the same situation as you are wrong. So you reject your intuitive feelings of rightness and assume you are probably wrong too. [An example] to demonstrate:

[…]

I feel like I’m an above-average driver. But I know there are surveys saying everyone believes they’re above-average drivers. Since most people who believe they’re an above-average driver are wrong, I reject my intuitive feelings and assume I’m probably just an average driver.

Applying this principle to ideological matters, he continues:

Every so often, I talk to people about politics and the necessity to see things from both sides. I remind people that our understanding of the world is shaped by tribalism, the media is often biased, and most people have an incredibly skewed view of the world. They nod their heads and agree with all of this and say it’s a big problem. Then I get to the punch line – that means they should be less certain about their own politics, and try to read sources from the other side. They shake their head, and say “I know that’s true of most people, but I get my facts from Vox, which backs everything up with real statistics and studies.” Then I facepalm so hard I give myself a concussion. This is the same situation where a tiny dose of Meta-Outside-View could have saved them.

Finally, he adds:

I started off by [writing] about “the principle of charity”, but I had trouble defining it and in retrospect I’m not that good at it anyway. What can be salvaged from such a concept? I would say “behave the way you would if you were less than insanely overconfident about most of your beliefs.” This is the Way. The rest is just commentary.

Cowen also has an interesting way of thinking about this. He points out that even if you have the most well-founded combination of beliefs possible, you’re still more likely than not to be wrong on some points, just as a matter of statistics:

We should be skeptical of ideologues who claim to know all of the relevant paths to making ours a better world. How can we be sure that a favored ideology will in fact bring about good consequences? Given the radical uncertainty of the more distant future, we can’t know how to achieve preferred goals with any kind of certainty over longer time horizons. Our attachment to particular means should therefore be highly tentative, highly uncertain, and radically contingent.

Our specific policy views, though we may rationally believe them to be the best available, will stand only a slight chance of being correct. They ought to stand the highest chance of being correct of all available views, but this chance will not be very high in absolute terms. Compare the choice of one’s politics to betting on the team most favored to win the World Series at the beginning of the season. That team does indeed have the best chance of winning, but most of the time our sports predictions are wrong, even if we are good forecasters on average [since even if the most-favored team has (say) a 25% chance of winning, that still means that the other 29 teams’ combined chances, despite each being less than 25% individually, will add up to 75% in aggregate, meaning that the actual most likely outcome is that one of those 29 other teams will end up winning]. So it is with politics and policy.

Our attitudes toward others should therefore be accordingly tolerant. Imagine that your chance of being right [in terms of your entire worldview, with all its thousands of constituent beliefs] is [higher than anyone else’s]. Yet there are many […] opposing [worldviews], so even if yours is best, you’re probably still wrong. Now imagine that your wrongness will lead to a slower rate of economic growth, a poorer future, and perhaps even the premature end of civilization (not enough science to fend off that asteroid!). That means your political views, though they are the best ones out there, will have grave negative consequences with [high] probability. […] In this setting, how confident should you really be about the details of your political beliefs? How firm should your dogmatism be about means-ends relationships? Probably not very; better to adopt a tolerant demeanor and really mean it.

As a general rule, we should not pat ourselves on the back and feel that we are on the correct side of an issue. We should choose the course that is most likely to be correct, keeping in mind that at the end of the day we are still more likely to be wrong than right. Our particular views, in politics and elsewhere, should be no more certain than our assessments of which team will win the World Series. With this attitude political posturing loses much of its fun, and indeed it ought to be viewed as disreputable or perhaps even as a sign of our own overconfident and delusional nature.
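
The bracketed World Series arithmetic is easy to check. A minimal sketch, using only the hypothetical 25% figure from the passage above:

```python
# A favorite with a 25% chance of winning the title is still more
# likely than not to lose: the other 29 teams' individually smaller
# chances sum to 75%.
p_favorite = 0.25
p_field = 1.0 - p_favorite  # the 29 other teams combined

print(f"Favorite wins:     {p_favorite:.0%} (the single most likely outcome)")
print(f"Someone else wins: {p_field:.0%} (the more likely outcome overall)")
```

Betting on the favorite is still the best available bet; it’s just a bet you should expect to lose.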

People like to act like whatever political or religious idea they’re espousing is a complete no-brainer, and that you’d have to be willfully ignorant not to see the blindingly obvious truth. But as Alexander points out (citing Yudkowsky), the fact that an issue is so contentious and so heavily debated suggests that in most cases, it’s not as self-evident as its supporters might believe it to be:

In one of the classics of the Less Wrong Sequences, Eliezer argues that policy debates should not appear one-sided. College students are pre-selected for “if they were worse they couldn’t get in, if they were better they’d get in somewhere else.” Political debates are pre-selected for “if it were a stupider idea no one would support it, if it were a better idea everyone would unanimously agree to do it.” We never debate legalizing murder, and we never debate banning glasses. The things we debate are pre-selected to be in a certain range of policy quality.

(To give three examples: no one debates banning sunglasses, that is obviously stupid. No one debates banning murder, that is so obviously a good idea that it encounters no objections. People do debate raising the minimum wage, because it has some plausible advantages and some plausible disadvantages. We might be able to squeeze one or two extra utils out of getting the minimum-wage question exactly right, but it’s unlikely to matter terribly much.)

He also explains how this effect can apply not only to theoretical policy debates, but also to individual news events:

The more controversial something is, the more it gets talked about.

A rape that obviously happened? Shove it in people’s face and they’ll admit it’s an outrage, just as they’ll admit factory farming is an outrage. But they’re not going to talk about it much. There are a zillion outrages every day, you’re going to need something like that to draw people out of their shells.

On the other hand, the controversy over dubious rape allegations is exactly that – a controversy. People start screaming at each other about how they’re misogynist or misandrist or whatever, and Facebook feeds get filled up with hundreds of comments in all capital letters about how my ingroup is being persecuted by your ingroup. At each step, more and more people get triggered and upset. Some of those triggered people do emergency ego defense by reblogging articles about how the group that triggered them are terrible, triggering further people in a snowball effect that spreads the issue further with every iteration.

Of course, needless to say, that doesn’t mean that every potentially contentious issue is necessarily a nuanced one with strong points on both sides. It’s totally possible to have a situation where one side of a debate is clearly the correct one, and yet the debate still doesn’t get resolved for some other reason. Maybe a particular idea really is obvious, for instance, but it just hasn’t been implemented yet because nobody cares about it all that much and it isn’t on anyone’s radar (aside from maybe certain narrow groups that have a vested interest in obstructing it). Abolishing the penny is one such niche idea that comes to mind. Or maybe there’s an idea that would be obvious to anyone with access to certain privileged information on the subject, but not everyone has access to such information. If you alone had been visited by aliens, for instance, the truth of their existence would be obvious to you but not to anyone else. Differences in religious belief and personal prejudice are other factors that can skew things for similar reasons.

Still, in most debates, where these special conditions don’t apply and the issues at hand are high-profile ones where everyone has access to roughly the same pool of information, it’s a good bet that you’re grossly oversimplifying the situation if you insist that the solutions are as obvious as prohibiting murder or not prohibiting sunglasses. They may seem obvious to you, but you can’t rightly call them obvious in a more general sense – because if they really were, then you wouldn’t be having to debate them in the first place.

Incidentally, this phenomenon also explains a lot about why the two main political parties in the US have the platforms that they do. Ever consider how strange it is that the country just happens to be almost exactly 50% liberal and 50% conservative on seemingly every major issue? That’s not because half the country has different beliefs from the other half, and by sheer coincidence the distributions of these beliefs all happen to break 50-50. It’s because the terms “liberal” and “conservative” are defined by the few points where the populace is split 50-50. In truth, on probably 99% of issues, everyone is pretty much in agreement. Murder should be illegal, everybody agrees. Sunglasses should be legal, everyone agrees. Nobody has to debate those issues where a popular consensus has already been reached, so the 1% of issues that are still unresolved and could go either way are the ones that the two parties triangulate their platforms around. In each of these edge cases, the two parties position themselves just slightly to the left and to the right of whatever the median voter’s position is, and adopt that position as their official platform – because if they positioned themselves at some more extreme point on the ideological spectrum where there wasn’t a 50-50 split in popular opinion (e.g. murder should be legal, sunglasses should be illegal, etc.), they would lose every election and their party would promptly go extinct. A status quo in which one party favors (say) legalizing marijuana, while the other wants to keep it illegal, is the only one that makes sense, given that the median voter currently falls roughly midway between these two positions. If either party suddenly decided to start advocating for a much more extreme position – e.g. that marijuana possession should be punishable by death, or that marijuana should be made mandatory and given to preschoolers – then the voting equivalent of natural selection would immediately wipe that party out of existence.

(Fun fact: This is also why you often see competing fast food restaurants located right next to each other; if either of them drifted too far away from the center of their particular area, they’d lose customers who were now closer to their competitors.)

It’s possible for the zeitgeist to shift over time, of course, and for the median position to become more liberal or more conservative than it used to be – and in fact we’ve seen this happen historically on almost every issue. But the point here is that as the median position shifts, the two parties’ platforms shift along with it – which is why mainstream conservatism no longer seeks to criminalize homosexuality, why mainstream liberalism no longer revolves around labor unions, and so on. The median opinion on those issues shifted in a new direction, and the two parties’ platforms shifted along with it. In political science, the formalized version of this is known as the Median Voter Theorem. The parties may be able to occasionally take stances that are a bit on the fringe – after all, most people don’t base their vote on just one issue, so pushing the margins on one or two issues won’t necessarily be a make-or-break prospect – but they can’t get too out of line or deviate from the mainstream on too many issues if they want to remain relevant. For better or for worse, popular consensus is what defines the range of viable political debate (also known as the Overton Window). And that means that if there’s a major public debate over a certain issue, you shouldn’t expect it to be one of the 99% of issues that have clear black-and-white answers that are universally obvious; there will probably be quite a bit of grey area involved, and each side will at least be able to make a case for their stance that’s reasonably plausible to the median voter.
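
To see the median-voter dynamic in action, here’s a toy simulation; the voter distribution, step size, and update rule are all invented for illustration, not drawn from any actual electoral model. Two parties repeatedly nudge their platforms in whichever direction gains votes, and both end up parked next to the median voter:

```python
import random

# Toy Median Voter Theorem simulation. Each voter backs whichever
# party's platform is closer to their own position; each party greedily
# nudges its platform in whichever direction wins more votes.
random.seed(0)
voters = [random.gauss(0.0, 1.0) for _ in range(10_001)]
median = sorted(voters)[len(voters) // 2]

def vote_share(own: float, rival: float) -> float:
    """Fraction of voters strictly closer to `own` than to `rival`."""
    return sum(abs(v - own) < abs(v - rival) for v in voters) / len(voters)

left, right = -2.0, 2.0  # both parties start far from the center
for _ in range(60):
    for step in (+0.05, -0.05):
        if vote_share(left + step, right) > vote_share(left, right):
            left += step
        if vote_share(right + step, left) > vote_share(right, left):
            right += step

print(f"median voter: {median:+.2f}")
print(f"platforms:    {left:+.2f} (left party), {right:+.2f} (right party)")
# Both platforms converge to within one step of the median: moving away
# from the center hands the middle voters to the opponent.
```

The same convergence pressure is behind the fast-food placement pattern mentioned above; in economics, that version is known as Hotelling’s law.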

In fact, when it comes to the really big political clashes that involve a lot of varied elements, it’s often the case that both sides have legitimate points in favor of their arguments, but are just focusing on subtly different aspects of the issue, so they appear to be more at odds with each other than they really are. You can have liberals lamenting the very real and serious problems of corporate consolidation and abuse of market inefficiencies, while conservatives decry the equally real instances of government waste and overregulation, and both sides can be correct – they’re just arguing two slightly different points. Yes, private-sector corruption is bad and should be reduced; and yes, public-sector corruption is also bad and should be reduced – but there’s no reason why these can’t both be true at the same time; it’s just that each side is emphasizing one over the other because it supports their narrative better.

(It’s for this reason, by the way, that you can often deduce a person’s general stance on an issue based simply on which part of that issue the person chooses to focus on when discussing it. If you’re discussing race and criminal justice, for instance, and the person’s first instinct is to start talking about gang violence and black-on-black crime, they’re probably a conservative – whereas if they start talking about police brutality and discriminatory sentencing practices, they’re probably a liberal. It’s not that either side is wrong – each of their chosen sub-topics is a legitimate problem that needs to be resolved – it’s just that these are two different questions, and the one they’re each choosing to focus on is the one that they feel bolsters their side more. (Incidentally, this is also why if you ever decide to focus on one of the issues that’s not one your side usually emphasizes – e.g. if you’re a liberal who wants to talk about crime in black neighborhoods, or if you’re a conservative who wants to talk about discriminatory sentencing practices – you may find yourself facing suspicions from your own side that you’ve secretly joined the enemy.) John Nerst has a must-read post on this whole dynamic here, explaining how something that one side might regard as a minor exception to the rule is often regarded by the other side as the very core of the issue, and vice-versa.)

There’s an old parable you might have heard of, about a group of blind men who encounter an elephant. Here’s the Wikipedia summary:

A group of blind men heard that a strange animal, called an elephant, had been brought to the town, but none of them were aware of its shape and form. Out of curiosity, they said: “We must inspect and know it by touch, of which we are capable”. So, they sought it out, and when they found it they groped about it. In the case of the first person, whose hand landed on the trunk, said “This being is like a thick snake”. For another one whose hand reached its ear, it seemed like a kind of fan. As for another person, whose hand was upon its leg, said, the elephant is a pillar like a tree-trunk. The blind man who placed his hand upon its side said the elephant “is a wall”. Another who felt its tail, described it as a rope. The last felt its tusk, stating the elephant is that which is hard, smooth and like a spear.

In some versions, the blind men then discover their disagreements, suspect the others to be not telling the truth and come to blows.

Ideological debates can be a lot like that. Each side is perfectly correct in their position, and thinks the other side must be either crazy or dishonest for disagreeing with them. But the other side is perfectly correct in their position too; it’s just that they’re concentrating on a different part of the whole.

If you can look past these different points of emphasis, though, and dig down to the underlying values motivating them, it actually turns out that liberals and conservatives often have more in common than they realize. Both groups, for instance, share the same core principle that it’s wrong for one group of people to dishonestly “game the system” and abuse government power to take hard-earned wealth from people who’ve actually worked for it and redistribute that wealth to themselves. The only difference is that conservatives are generally more inclined to think that the lower-income segments of society – welfare cheats, illegal immigrants, etc. – are the ones doing this, while liberals are more inclined to think that the wealthy – big corporations, bankers, CEOs, etc. – are the ones doing it. What they agree on is that ordinary working-class people are getting screwed, and are shouldering too much of the burden that should rightly be carried by others. And it’s the same story with a lot of other such issues; the two sides may differ in their perception of what the actual situation on the ground is, but if they could get on the same page regarding the facts – if they could both see the elephant in its entirety – then they wouldn’t have much left that they actually disagreed about. Their overall priorities, values, and interests would largely be the same.

Now having said that, of course, there are plenty of issues where this isn’t the case – at least not at the object level. There are some situations in which the opposing sides aren’t just looking at things from different perspectives and focusing on different areas, but actually have conflicting interests at stake. Things like differing incentives, imbalances of power, and mutually incompatible goals all come into play to some extent or another; and in some cases, these factors are decisive. So although it’s certainly a noble aim to try and resolve ideological disputes cooperatively in order to reach a common truth, it’s not always as simple as just dispassionately correcting the factual mistakes made by one or both sides until they’re both in complete agreement (an approach sometimes called “mistake theory”). Sometimes you have to take the extra step of accounting for divergent interests first – because otherwise, the two sides’ interests may just be irreconcilable (a school of thought known as “conflict theory”). Alexander elaborates on the differences between the “mistake theory” and “conflict theory” philosophies:

Mistake theorists treat politics as science, engineering, or medicine. The State is diseased. We’re all doctors, standing around arguing over the best diagnosis and cure. Some of us have good ideas, others have bad ideas that wouldn’t help, or that would cause too many side effects.

Conflict theorists treat politics as war. Different blocs with different interests are forever fighting to determine whether the State exists to enrich the Elites or to help the People.

Mistake theorists view debate as essential. We all bring different forms of expertise to the table, and once we all understand the whole situation, we can use wisdom-of-crowds to converge on the treatment plan that best fits the need of our mutual patient, the State. Who wins on any particular issue is less important than creating an environment where truth can generally prevail over the long term.

Conflict theorists view debate as having a minor clarifying role at best. You can “debate” with your boss over whether or not you get a raise, but only with the shared understanding that you’re naturally on opposite sides, and the “winner” will be based less on objective moral principles than on how much power each of you has. If your boss appeals too many times to objective moral principles, he’s probably offering you a crappy deal.

Mistake theorists treat different sides as symmetrical. There’s the side that wants to increase the interest rate, and the side that wants to decrease it. Both sides have about the same number of people. Both sides include some trustworthy experts and some loudmouth trolls. Both sides are equally motivated by trying to get a good economy. The only interesting difference is which one turns out (after all the statistics have been double-checked and all the relevant points have been debated) to be right about the matter at hand.

Conflict theorists treat the asymmetry of sides as their first and most important principle. The Elites are few in number, but have lots of money and influence. The People are many but poor – yet their spirit is indomitable and their hearts are true. The Elites’ strategy will always be to sow dissent and confusion; the People’s strategy must be to remain united. Politics is won or lost by how well each side plays its respective hand.

It seems clear enough that a lot of ideological conflicts are based on the two sides simply having different object-level goals. Having said that, though, it does seem like people’s meta-level goals are almost always on more or less the same page – and that’s important. As Alexander points out, we all want to have a thriving economy, strong families, equal opportunity, freedom, justice, and so on. So there’s no reason to rule out reasoned discourse just because the two sides’ object-level interests might differ; all it means is that the reasoned discourse should be taking place at one meta-level higher, where everyone’s interests are the same, and the object-level asymmetries can be accounted for within that reasoned discourse, as just another one of the elements factoring into the debate. In other words, just because two sides’ interests are sometimes diametrically opposed in a zero-sum competition – like in a sports game – doesn’t mean that the two sides can’t still work cooperatively outside of that context to make sure that the game itself is set up in a fair way and that it’s optimally serving its intended purpose that they all desire to uphold (as sports leagues do routinely). Or to apply this logic to workers interacting with their bosses, for instance, it’s true that the imbalance of power will define the object-level debate of a worker asking for a raise, but that doesn’t mean that there’s nothing to be gained by taking the discussion up a meta-level and having a debate as a society over whether that power imbalance is justifiable in the first place, what the implications of that are, and so forth. People in different social positions, with different needs and preferences, can still work together and constructively debate how best to meet the fundamental terminal values that they all share. (And when they don’t – i.e. whenever you find two opposing sides who seemingly aren’t able to reach any kind of common understanding – it’s very often because they haven’t been able to zoom out to the appropriate meta-level, but have instead gotten stuck debating the object-level points of contention instead, which are downstream of where their fundamental disagreements actually lie. Instead of backing up and focusing on first principles first, and then going from there, opposing sides too often devote all their time and energy to arguing about the second-order issues that stem from those first-order assumptions – e.g. debating specific tax proposals instead of the fundamental philosophical justifications for taxation, or debating specific restrictions on abortion instead of the religious systems of morality underpinning those restrictions – and then wonder why those arguments never seem to get anywhere or change anyone’s mind. Ultimately, what they often conclude is that there’s simply no possible way of ever reaching any common ground; but that’s a mistake – their problem is simply that they’re trying to put the cart before the horse.)

Now again, needless to say, just because both sides of a debate may have understandable reasons for their positions, and just because it may be valuable to discuss those reasons, that doesn’t mean that both sides are always equally correct, or that there are always two equally valid sides to every argument, or that the truth always lies somewhere in the middle. As Yudkowsky puts it:

Do not think that fairness to all sides means balancing yourself evenly between positions; truth is not handed out in equal portions before the start of a debate.

And as Sagan adds:

Keeping an open mind is a virtue – but, as the space engineer James Oberg once said, not so open that your brains fall out. Of course we must be willing to change our minds when warranted by new evidence. But the evidence must be strong. Not all claims to knowledge have equal merit.

Mainstream news media outlets are often guilty of pretending otherwise – acting as though both sides of a debate are equal – for the sake of maintaining objectivity. But the important point here is that objectivity is not the same thing as neutrality; there can be cases where the facts simply support one side of a debate more than the other. If there’s a debate, for instance, over whether the president is a natural-born US citizen or whether he was born in Kenya, then clearly this is a case where one answer is simply right and the other is wrong – and acting like both sides have just as much validity doesn’t help anyone come any closer to understanding where the truth actually lies; if anything, it just muddies the waters further, which is the opposite of what good analysis should be doing. Noting that one side has all the facts and evidence in its favor, and that the other side doesn’t, may not be a neutral assessment of the debate – but it’s the only one that can rightly be called objective. So if intellectual objectivity is what you’re really after, you have to be willing to call a spade a spade rather than making false pretenses of equivalence.

Paul Krugman expresses his exasperation with media outlets that insist on putting neutrality above objectivity, even in crisis situations where one side is acting reasonably and the other isn’t:

Some of us have long complained about the cult of “balance,” the insistence on portraying both parties as equally wrong and equally at fault on any issue, never mind the facts. I joked long ago that if one party declared that the earth was flat, the headlines would read “Views Differ on Shape of Planet.”

[…]

The cult of balance has played an important role in bringing us to the edge of disaster. For when reporting on political disputes always implies that both sides are to blame, there is no penalty for extremism. Voters won’t punish you for outrageous behavior if all they ever hear is that both sides are at fault.

He continues:

And yes, I think this is a moral issue. The “both sides are at fault” people have to know better; if they refuse to say it, it’s out of some combination of fear and ego, of being unwilling to sacrifice their treasured pose of being above the fray.

It’s not necessarily a bad thing to want to be above the fray, of course; trying to maintain a dispassionate sense of judgment is an important part of being a good truth-seeker (more on that later). But if the debate at hand is over something really significant, then pretending that both sides of the debate are equal – insisting on false centrism just for its own sake – can be dangerously counterproductive. And in the most extreme circumstances, like if there’s a debate over whether it’s morally OK to enslave an entire race of people, or whether it should be legal to deny the vote to an entire gender, then neutrality is positively disastrous. As Archbishop Desmond Tutu puts it:

If you are neutral in situations of injustice, you have chosen the side of the oppressor. If an elephant has its foot on the tail of a mouse and you say that you are neutral, the mouse will not appreciate your neutrality.

Genuine malice may be rare, but where it does exist it’s important to confront it, in order to keep it from spreading unchecked and in order to deter other would-be wrongdoers.

Still, having said all this, it’s worth pointing out that even in such situations, some approaches are more productive than others. As Alexander explains:

Some people are so odious that an alarm needs to be spread. [It’s preferable] to err on the side of not doing that […] but sometimes the line will need to be crossed.

[…]

I think the most important consideration is that it be crossed in a way that doesn’t create a giant negative-sum war-of-all-against-all. That is, Democrats try to get Republicans fired for the crime of supporting Republicanism, Republicans try to get Democrats fired for the crime of supporting Democratism, and the end result is a lot of people getting fired but the overall Republican/Democrat balance staying unchanged.

That suggests a heuristic very much like Be Nice, At Least Until You Can Coordinate Meanness[…]: don’t try to destroy people in order to enforce social norms that only exist in your head. If people violate a real social norm, that the majority of the community agrees upon, and that they should have known – that’s one thing. If you idiosyncratically believe something is wrong, or you’re part of a subculture that believes something is wrong even though there are opposite subcultures that don’t agree – then trying to enforce your idiosyncratic rules by punishing anyone who breaks them is a bad idea.

And one corollary of this is that it shouldn’t be arbitrary. Ten million people tell sexist jokes every day. If you pick one of them, apply maximum punishment to him, and let the other 9.99 million off scot-free, he’s going to think it’s unfair – and he’ll be right. This is directly linked to the fact that there isn’t actually that much of a social norm against telling sexist jokes. My guess is that almost everyone who posts child pornography on Twitter gets in trouble for it, and that’s because there really is a strong anti-child pornography norm.

(this is also how I feel about the war on drugs. One in a thousand marijuana users gets arrested, partly because there isn’t enough political will to punish all marijuana users, partly because nobody really thinks marijuana use is that wrong. But this ends out unfair to the arrested marijuana user, not just because he’s in jail for the same thing a thousand other people did without consequence, but because he probably wouldn’t have done it if he’d really expected to be punished, and society was giving him every reason to think he wouldn’t be.)

This set of norms is self-correcting: if someone does something you don’t like, but there’s not a social norm against it, then your next step should be to create a social norm against it. If you can convince 51% of the community that it’s wrong, then the community can unite against it and you can punish it next time. If you can’t convince 51% of the community that it’s wrong, then you should try harder, not play vigilante and try to enforce your unpopular rules yourself.

If you absolutely can’t tolerate something, but you also can’t manage to convince your community that it’s wrong and should be punished, you should work on finding methods that isolate you from the problem, including building a better community somewhere else. I think some of this collapses into a kind of Archipelago solution. Whatever the global norms may be, there ought to be communities catering to people who want more restrictions than normal, and other communities catering to people who want fewer. These communities should have really explicit rules, so that everybody knows what they’re getting into. People should be free to self-select into and out of those communities, and those self-selections should be honored. Safe spaces, 4chan, and this blog are three very different kinds of intentional communities with unusual but locally-well-defined free speech norms, they’re all good for the people who use them, and as long as they keep to themselves I don’t think outsiders have any right to criticize their existence.

Regardless of which angle you take, though, the crucial thing is just to be able to maintain enough perspective to recognize that not every issue is one of life and death, that not every issue is one of oppressors vs. victims, and that not every issue is one of fundamentally incompatible worldviews. Again, on the vast majority of topics, almost everyone is more or less on the same page; as Hank Green puts it, the challenge is usually “hard problems,” not “bad humans.” And even when disagreements arise, they aren’t typically irreconcilable rifts of earth-shattering consequence – they’re more a matter of trying to parse fine-grained specifics within the same general Overton Window. As Brennan writes:

It’s especially bizarre that mainstream political discussion is so heated and apocalyptic, given how little is at stake [in so many of the issues being discussed]. Republicans and Democrats disagree about many things, but in the logical space of possible political views they’re not merely in the same solar system but also on the same planet. They’re not debating deep existential questions about justice but instead surface disputes about the exact shape of the society they mutually accept. They’ve both agreed to buy the Camry; they’re now just debating whether to get the sport package or hybrid. Their disputes are [typically] tiny. Should we raise the top marginal income tax by 3 percentage points? Should we keep the minimum wage where it is or raise it by three dollars per hour? Should we pay $1 trillion a year for education or $1.2 trillion? Should employers be required to pay for birth control, or should women who work for closely held family corporations with fundamentalist owners have to pay ten to fifty dollars a month from their own pockets?

It’s one thing to be fighting for your life in an all-or-nothing conflict where the right side and the wrong side are unmistakably defined in absolute terms – like if you’re part of an ethnic minority trying to escape genocide or something. But such drastic circumstances are so rare that most Americans will never encounter them in their lifetimes; practically every contentious issue we encounter in everyday life – once you accurately recognize the positions that most people actually hold – turns out to be one of those more fine-grained disputes, where there are at least some ambiguous grey areas and the right answer isn’t necessarily obvious to everyone. And in such cases, a combative, absolutist approach just isn’t the right tool for the job. The best way to make progress in these areas is through collaborative truth-seeking – where the two sides with different perspectives put their heads together to compare ideas, help identify strong points and weak points in each other’s arguments, and jointly work their way a little closer to the broader truth. It’s not always necessary to agree with someone else’s worldview in this context – but being able to understand it, accurately and on its own terms, is a crucial step toward making legitimate long-term progress on the issue. As Ben Wave observes:

I find it more productive to try and cooperate with my opposites who like cooperation than to fight against my opposites who do not.

And commenter A7exrolance puts it this way:

Honestly, I hate this “me against them” mentality when discussing things like this. The highest form of discourse should be a mutual pursuit of truth. The use of antagonizing language (e.g. opponent) promotes the thinking that your objective should be to “win” an argument, which is how most people view heavy discussions which could otherwise serve to be more constructive. If people argued to learn instead of arguing to win, perhaps opposing sides as they are now would be able to come together and get closer to a resolvable truth.

To borrow an old piece of relationship advice, debate works best when it’s not “you vs. your opponent,” but rather “you and them vs. the problem.” The goal shouldn’t be to prove that you’re right and were right all along; the goal should simply be to find truth. Or as Ann Farmer puts it: “It isn’t about being right. It’s about getting it right.”

Ideally, “you” and “your opponent” shouldn’t even enter into the equation at all; as best you can, you should take your egos out of the picture entirely and just let the ideas be what compete against each other, not the people.

Of course, finding openings for this kind of adversarial collaboration can be hard in our current ideological environment, where seemingly everyone is arguing to win rather than arguing to learn – simply trying to display understanding rather than to gain understanding, as Galef puts it. So Alexander offers some guidelines that can be used to make debates more constructive:

Here’s what I think are minimum standards to deserve the [designation “Purely Logical Debate”]:

1. Debate where two people with opposing views are talking to each other (or writing, or IMing, or some form of bilateral communication). Not a pundit putting an article on Huffington Post and demanding Trump supporters read it. Not even a Trump supporter who comments on the article with a counterargument that the author will never read. Two people who have chosen to engage and to listen to one another.

2. Debate where both people want to be there, and have chosen to enter into the debate in the hopes of getting something productive out of it. So not something where someone posts a “HILLARY IS A CROOK” meme on Facebook, someone gets really angry and lists all the reasons Trump is an even bigger crook, and then the original poster gets angry and has to tell them why they’re wrong. Two people who have made it their business to come together at a certain time in order to compare opinions.

3. Debate conducted in the spirit of mutual respect and collaborative truth-seeking. Both people reject personal attacks or ‘gotcha’ style digs. Both people understand that the other person is around the same level of intelligence as they are and may have some useful things to say. Both people understand that they themselves might have some false beliefs that the other person will be able to correct for them. Both people go into the debate with the hope of convincing their opponent, but not completely rejecting the possibility that their opponent might convince them also.

4. Debate conducted outside of a high-pressure point-scoring environment. No audience cheering on both participants to respond as quickly and bitingly as possible. If it can’t be done online, at least do it with a smartphone around so you can open Wikipedia to resolve simple matters of fact.

5. Debate where both people agree on what’s being debated and try to stick to the subject at hand. None of this “I’m going to vote Trump because I think Clinton is corrupt” followed by “Yeah, but Reagan was even worse and that just proves you Republicans are hypocrites” followed by “We’re hypocrites? You Democrats claim to support women’s rights but you love Muslims who make women wear headscarves!” Whether or not it’s hypocritical to “support women’s rights” but “love Muslims”, it doesn’t seem like anyone is even trying to change each other’s mind about Clinton at this point.

These to me seem like the bare minimum conditions for a debate that could possibly be productive.

(See also Liam Rosen’s great guide to arguing constructively here.)

Upholding these standards isn’t always easy. Alexander continues:

The world is a scary place, full of bad people who want to hurt you, and in the state of nature you’re pretty much obligated to engage in whatever it takes to survive.

But instead of sticking with the state of nature, we have the ability to form communities built on mutual disarmament and mutual cooperation. Despite artificially limiting themselves, these communities become stronger than the less-scrupulous people outside them, because they can work together effectively and because they can boast a better quality of life that attracts their would-be enemies to join them. At least in the short term, these communities can resist races to the bottom and prevent the use of personally effective but negative-sum strategies.

One such community is the kind where members try to stick to rational discussion as much as possible. These communities are definitely better able to work together, because they have a powerful method of resolving empirical disputes. They’re definitely better quality of life, because you don’t have to deal with constant insult wars and personal attacks. And the existence of such communities provides positive externalities to the outside world, since they are better able to resolve difficult issues and find truth.

But forming a rationalist community isn’t just about having the will to discuss things well. It’s also about having the ability. Overcoming bias is really hard, and so the members of such a community need to be constantly trying to advance the art and figure out how to improve their discussion tactics.

That’s why another important part of fostering constructive discourse, in addition to upholding these standards yourself, is to hold your friends and allies accountable to them as well. Exerting a little positive peer pressure can go a long way toward keeping the more radical elements of your side from getting too unhinged and derailing the debate; and even a subtle shift in conversational norms can do wonders for creating an environment where ideas can be exchanged more freely. QualiaSoup and TheraminTrees illustrate this point with a personal story from their own lives:

One aspect of the culture at our secondary school was that students were expected to respond aggressively to insults, specifically insults to their families, especially their mothers. The insults didn’t need to be that imaginative. Often just the phrase “your mum” was enough. In response, students were expected to get angry and physically defend their mothers’ honor. If they didn’t, they were seen as weak and cowardly. This ritualized behavior continued for a couple of years; then there was an interesting shift. Aggressive reactions to insults stopped being admired. Students who were easily provoked came to be seen as weak and fragile, and were scorned for being hysterical. Shrugging off insults became the cool thing to do. The frequency of insults didn’t go down, but the frequency of aggressive responses plummeted.

[…]

In groups where dramatic expressions of offense are encouraged and rewarded, we can make two predictions. First, we can expect to see more expressions of offense. Second, we can expect to see them in response to increasingly smaller provocations, as individuals hunt for things to act offended about. But when overblown reactions to events receive scorn instead of sympathy, the emotional displays can fizzle out fast, exposing the fact that they’re unnecessary and well within personal control. If teenagers can learn to control their responses like this, shouldn’t we expect even more emotional maturity from adults?

The ability to resist your hostile impulses and actually work with your opponents can enable you to accomplish things that would be impossible within a strictly antagonistic dynamic; and if you’re able to get others to adopt a more collaborative mindset too, this effect is magnified all the more. It can seem galling at first to even consider the notion of cooperating with the enemy or compromising on sacred beliefs – but if both sides of a debate just take a breath and make a real effort to meet each other where they are, the results can sometimes be downright miraculous. As Pinker writes:

An ingenious rerouting of the psychology of taboo in the service of peace has recently been explored by Scott Atran, working with the psychologists Jeremy Ginges and Douglas Medin and the political scientist Khalil Shikaki. In theory, peace negotiations should take place within a framework of Market Pricing. A surplus is generated when adversaries lay down their arms – the so-called peace dividend – and the two sides get to yes by agreeing to divide it. Each side compromises on its maximalist demand in order to enjoy a portion of that surplus, which is greater than what they would end up with if they walked away from the table and had to pay the price of continuing conflict.

Unfortunately, the mindset of sacredness and taboo can confound the best-laid plans of rational deal-makers. If a value is sacred in the minds of one of the antagonists, then it has infinite value, and may not be traded away for any other good, just as one may not sell one’s child for any other good. People inflamed by nationalist and religious fervor hold certain values sacred, such as sovereignty over hallowed ground or an acknowledgment of ancient atrocities. To compromise them for the sake of peace or prosperity is taboo. The very thought unmasks the thinker as a traitor, a quisling, a mercenary, a whore.

In a daring experiment, the researchers did not simply avail themselves of the usual convenience sample of a few dozen undergraduates who fill out questionnaires for beer money. They surveyed real players in the Israel-Palestine dispute: more than six hundred Jewish settlers in the West Bank, more than five hundred Palestinian refugees, and more than seven hundred Palestinian students, half of whom identified with Hamas or Palestinian Islamic Jihad. The team had no trouble finding fanatics within each group who treated their demands as sacred values. Almost half the Israeli settlers indicated that it would never be permissible for the Jewish people to give up part of the Land of Israel, including Judea and Samaria (which make up the West Bank), no matter how great the benefit. Among the Palestinians, more than half the students indicated that it was impermissible to compromise on sovereignty over Jerusalem, no matter how great the benefit, and 80 percent of the refugees held that no compromise was possible on the “right of return” of Palestinians to Israel.

The researchers divided each group into thirds and presented them with a hypothetical peace deal that required all sides to compromise on a sacred value. The deal was a two-state solution in which the Israelis would withdraw from 99 percent of the West Bank and Gaza but would not have to absorb Palestinian refugees. Not surprisingly, the proposal did not go over well. The absolutists on both sides reacted with anger and disgust and said that they would, if necessary, resort to violence to oppose the deal.

With a third of the participants, the deals were sweetened with cash compensation from the United States and the European Union, such as a billion dollars a year for a hundred years, or a guarantee that the people would live in peace and prosperity. With these sweeteners on the table, the nonabsolutists, as expected, softened their opposition a bit. But the absolutists, forced to contemplate a taboo tradeoff, were even more disgusted, angry, and prepared to resort to violence. So much for the rational-actor conception of human behavior when it comes to politico-religious conflict.

All this would be pretty depressing were it not for Tetlock’s observation that many ostensibly sacred values are really pseudo-sacred and may be compromised if a taboo tradeoff is cleverly reframed. In a third variation of the hypothetical peace deal, the two-state solution was augmented with a purely symbolic declaration by the enemy in which it compromised one of its sacred values. In the deal presented to the Israeli settlers, the Palestinians “would give up any claims to their right of return, which is sacred to them,” or “would be required to recognize the historic and legitimate right of the Jewish people to Eretz Israel.” In the deal presented to the Palestinians, Israel would “recognize the historic and legitimate right of the Palestinians to their own state and would apologize for all of the wrongs done to the Palestinian people,” or would “give up what they believe is their sacred right to the West Bank,” or would “symbolically recognize the historic legitimacy of the right of return” (while not actually granting it). The verbiage made a difference. Unlike the bribes of money or peace, the symbolic concession of a sacred value by the enemy, especially when it acknowledges a sacred value on one’s own side, reduced the absolutists’ anger, disgust, and willingness to endorse violence. The reductions did not shrink the absolutists’ numbers to a minority of their respective sides, but the proportions were large enough to have potentially reversed the outcomes of their recent national elections.

The implications of this manipulation of people’s moral psychology are profound. To find anything that softens the opposition of Israeli and Palestinian fanatics to what the rest of the world recognizes as the only viable solution to their conflict is something close to a miracle. The standard tools of diplomacy wonks, who treat the disputants as rational actors and try to manipulate the costs and benefits of a peace agreement, can backfire. Instead they must treat the disputants as moralistic actors, and manipulate the symbolic framing of the peace agreement, if they want a bit of daylight to open up. The human moral sense is not always an obstacle to peace, but it can be when the mindset of sacredness and taboo is allowed free rein. Only when that mindset is redeployed under the direction of rational goals will it yield an outcome that can truly be called moral.

In more general terms, constructive discourse isn’t just helpful for reconciling competing goals; if you’re genuinely willing to put your head together with people whose outlooks differ from your own, it can sometimes lead both sides onto new ground that neither even knew existed. If you and your opponent can set aside, if only momentarily, the urgency of “winning the argument” and focus instead on figuring out where the truth actually lies, you can sometimes produce insights jointly that neither of you could have reached working alone. As the old cliché goes, two heads are better than one – and if those two heads approach an issue from radically different angles, so much the better. McRaney elaborates:

According to the scientists who subscribe to [argumentation] theory, when we reason alone, that’s when we’re biased, that’s when we get ourselves into trouble, producing weak arguments that we believe are strong. It’s when we contribute our arguments to a pool, and then everyone together samples from that pool, and evaluates those arguments against one another, that reasoning can accomplish amazing things. It’s then that the poor arguments fail and the best arguments win. For instance, there’s this test in psychology called the Cognitive Reflection Task, which has these questions that people usually get wrong, like “If it takes five machines five minutes to make five widgets, how long would it take 100 machines to make 100 widgets?” Now, the answer to that is five minutes, but most people don’t get it right. “Each machine makes one widget per [five minutes]” is the solution. Now, reasoning alone, 83% of people who take that test under laboratory conditions answer at least one of the questions incorrectly. A third get all of them wrong. But in groups of three or more, no one gets any wrong. At least one member always sees the correct answer. And that person is often not very confident at first, but the resulting debate leads to the truth. Now, of course we have to take motivated reasoning into account here; people usually have a goal in mind when they’re arguing one thing or another. But when the goal is to be right, reasoning together often achieves that goal in a way that reasoning alone cannot. Cognitive scientist Tom Stafford says that when a group has developed a strong sense of trust and it faces a common goal, [then] when the majority is wrong, the few who are correct can bring the population around to the right answer. In fact, this is the whole idea behind argumentation theory.
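It’s worth pausing to make the arithmetic here explicit. The widget answer falls straight out of the rates; and a quick back-of-the-envelope calculation – a rough sketch assuming (unrealistically) that group members reason independently, which is my simplification and not a claim from the study – shows that the groups’ near-perfect success can’t be explained by lucky sampling alone:

```python
# The widget puzzle, worked out: 5 widgets / 5 machines / 5 minutes
# means each machine produces 1/5 of a widget per minute.
rate = 5 / 5 / 5                # widgets per machine per minute
print(100 / (100 * rate))       # -> 5.0 minutes for 100 machines to make 100 widgets

# McRaney's group statistic: if only ~17% of individuals answer every
# CRT question correctly (83% miss at least one), how often would a
# randomly drawn group of three contain at least one perfect solver?
# (Illustrative independence assumption only -- not from the study.)
p_solver = 0.17
print(1 - (1 - p_solver) ** 3)  # -> ~0.43, i.e. only ~43% of groups
```

If groups succeeded only when they happened to contain a member who would have aced the test alone, you’d expect well under half of them to get everything right – not essentially all of them. The gap is the work the debate itself is doing.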

The tricky part, of course, is developing that strong sense of trust and shared purpose. There are a few different ways to accomplish this, depending on the situation. In some settings, for instance (like when a group is trying to work through a problem together as a team), it can change the whole dynamic just to have someone in the room who’s willing to set the tone – going out on limbs and making bold statements – to show everyone else that it’s safe to do the same without fear of backlash. As McRaney puts it:

Every team needs at least one asshole who doesn’t give a shit if he or she gets fired or exiled or excommunicated. For a group to make good decisions, they must allow dissent and convince everyone they are free to speak their mind without risk of punishment.

In other cases, it can be as straightforward as just making it an explicit rule that people won’t be judged or punished for weird or outrageous ideas. Fisher and Ury suggest that instead of framing the dialogue as a debate or an argument, it can be more productive to frame it as a freewheeling brainstorming session:

A brainstorming session is designed to produce as many ideas as possible to solve the problem at hand. The key ground rule is to postpone all criticism and evaluation of ideas. The group simply invents ideas without pausing to consider whether they are good or bad, realistic or unrealistic. With those inhibitions removed, one idea should stimulate another, like firecrackers setting off one another. In a brainstorming session, people need not fear looking foolish since wild ideas are explicitly encouraged.

And Yudkowsky likewise emphasizes the value of withholding judgment until even the most far-out ideas have been thoroughly explored. Citing Robyn Dawes, he elaborates:

From pp. 55-56 of Robyn Dawes’s Rational Choice in an Uncertain World. Bolding added.

Norman R. F. Maier noted that when a group faces a problem, the natural tendency of its members is to propose possible solutions as they begin to discuss the problem. Consequently, the group interaction focuses on the merits and problems of the proposed solutions, people become emotionally attached to the ones they have suggested, and superior solutions are not suggested. Maier enacted an edict to enhance group problem solving: “Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any.” It is easy to show that this edict works in contexts where there are objectively defined good solutions to problems.

Maier devised the following “role playing” experiment to demonstrate his point. Three employees of differing ability work on an assembly line. They rotate among three jobs that require different levels of ability, because the most able – who is also the most dominant – is strongly motivated to avoid boredom. In contrast, the least able worker, aware that he does not perform the more difficult jobs as well as the other two, has agreed to rotation because of the dominance of his able co-worker. An “efficiency expert” notes that if the most able employee were given the most difficult task and the least able the least difficult, productivity could be improved by 20%, and the expert recommends that the employees stop rotating. The three employees and a fourth person designated to play the role of foreman are asked to discuss the expert’s recommendation. Some role-playing groups are given Maier’s edict not to discuss solutions until having discussed the problem thoroughly, while others are not. Those who are not given the edict immediately begin to argue about the importance of productivity versus worker autonomy and the avoidance of boredom. Groups presented with the edict have a much higher probability of arriving at the solution that the two more able workers rotate, while the least able one sticks to the least demanding job – a solution that yields a 19% increase in productivity.

I have often used this edict with groups I have led – particularly when they face a very tough problem, which is when group members are most apt to propose solutions immediately. While I have no objective criterion on which to judge the quality of the problem solving of the groups, Maier’s edict appears to foster better solutions to problems.

This is so true it’s not even funny. And it gets worse and worse the tougher the problem becomes. Take Artificial Intelligence, for example. A surprising number of people I meet seem to know exactly how to build an Artificial General Intelligence, without, say, knowing how to build an optical character recognizer or a collaborative filtering system (much easier problems). And as for building an AI with a positive impact on the world – a Friendly AI, loosely speaking – why, that problem is so incredibly difficult that an actual majority resolve the whole issue within 15 seconds. Give me a break.

(Added: This problem is by no means unique to AI. Physicists encounter plenty of nonphysicists with their own theories of physics, economists get to hear lots of amazing new theories of economics. If you’re an evolutionary biologist, anyone you meet can instantly solve any open problem in your field, usually by postulating group selection. Et cetera.)

Maier’s advice echoes the principle of the bottom line, that the effectiveness of our decisions is determined only by whatever evidence and processing we did in first arriving at our decisions – after you write the bottom line, it is too late to write more reasons above. If you make your decision very early on, it will, in fact, be based on very little thought, no matter how many amazing arguments you come up with afterward.

And consider furthermore that We Change Our Minds Less Often Than We Think: 24 people assigned an average 66% probability to the future choice thought more probable, but only 1 in 24 actually chose the option thought less probable. Once you can guess what your answer will be, you have probably already decided. If you can guess your answer half a second after hearing the question, then you have half a second in which to be intelligent. It’s not a lot of time.

Traditional Rationality emphasizes falsification – the ability to relinquish an initial opinion when confronted by clear evidence against it. But once an idea gets into your head, it will probably require way too much evidence to get it out again. Worse, we don’t always have the luxury of overwhelming evidence.

I suspect that a more powerful (and more difficult) method is to hold off on thinking of an answer. To suspend, draw out, that tiny moment when we can’t yet guess what our answer will be; thus giving our intelligence a longer time in which to act.

Even half a minute would be an improvement over half a second.
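To put Yudkowsky’s calibration figure in concrete terms: if that 66% average confidence had been well calibrated, roughly a third of the twenty-four subjects should have ended up picking the option they’d rated less probable. A quick sanity check (my own arithmetic on the quoted numbers, not anything from the original study):

```python
# How many of 24 people "should" have switched to the option they rated
# less probable, if their stated 66% confidence were well calibrated?
n, confidence = 24, 0.66
print(n * (1 - confidence))   # -> ~8.2 expected switchers; only 1 was observed

# The observed choices were ~96% predictable -- far more decided than
# the stated 66% confidence implied.
print((n - 1) / n)            # -> ~0.958
```

In other words, by the time you can guess what your answer will be, the deciding has mostly already happened – which is exactly Yudkowsky’s point.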

As Yudkowsky illustrates, this technique of withholding judgment until the space of possible answers has been explored isn’t just useful for making group dialogues more productive; it’s also a valuable tool for sharpening your own thinking as an individual – which, really, is even more fundamental. The best way to get the most out of your exchanges with others, after all, is to first get your own thoughts straight; ideally, you should spend even more time honing your ideas in your own head than you spend bouncing them off other people. Knowing how to figure out what’s actually true amid all the noise is one of the most important skills a person can have; and delaying judgment is only one method for avoiding mental traps – there are plenty more where that came from. So let’s now discuss a few of those.

Continued on next page →