How to Hack Confirmation Bias

The following is my restatement of Jason Cockrell’s theory of confirmation bias as a collective cognition strategy:

There are a great many instances where making a generalization could be useful, helpful, or necessary. But most people aren’t in possession of enough information to make rigorous and defensible generalizations very often. What people do instead is constantly form hypotheses, or adopt ones they hear, on rather flimsy grounds. If a thought occurs to me, or if I hear an observation or speculation from someone else, and soon after I see some fact or situation that appears to correspond to that hypothesis, then that hypothesis is “confirmed” (in my mind). And each subsequent “confirmation” will tend to make it seem more compelling to me. Epistemically, this one-off correspondence (or even a pattern of correspondence) means nothing. It could be coincidence. It could be random chance. There could be something going on, but something *other* than what I speculated, etc… But what it causes me to do is adopt the hypothesis as a predictive model for myself and restate it to others (until it is disconfirmed to my satisfaction). If the experience of those who hear it disconfirms my hypothesis, or simply fails to confirm it, they will quickly forget about it. They’re hearing random hypotheses all the time, and many of them don’t hold up and are therefore discarded. But if THEIR experience “confirms” the hypothesis, in their own minds, then they will adopt it and restate it to still others.
The implication should be obvious. Confirmation bias will cause all people, some of the time, to adopt false hypotheses and act as if they were true, just by random chance. Thinking those hypotheses true, they will then restate them to others. But false hypotheses will tend to fizzle out and die, since others will not consistently adopt them unless they are subsequently “confirmed” in their own experience. True hypotheses, on the other hand, those which correspond to reality, those with consistent predictive power, will tend to spread further and faster, until they attain the status of common knowledge, or widely known stereotype. What tends to produce accurate hypotheses and stereotypes is not the cognitive processes and strategies of any given individual (for these are indeed biased and flawed) but the iterated spread of ideas through a population over time. And research shows that this is indeed effective. Commonly held stereotypes correspond to reality with correlations between 0.4 and 0.9, with an average correlation of about 0.8.

(See “How a Rebellious Scientist Uncovered the Surprising Truth About Stereotypes.”)

In other words, stereotypes are an extremely accurate description of reality. And that description, of sometimes very subtle phenomena, is accurate not because any individual has the means to probe those phenomena adequately, but because everyone’s inadequate means, taken together, amount to an extremely powerful engine of empirical research, of conjecture and refutation.

Every individual is a laboratory for testing hypotheses. Confirmation bias causes individuals, taken in isolation, to believe wrong ideas are true. But it is tremendously valuable for sorting hypotheses: which to kill, and which to submit to others for further testing (for that’s really what people are doing when they “adopt a hypothesis as true”). With time and repetition, the consensus tends to converge on the truth.
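To make that concrete, here is a minimal sketch of the filtering dynamic. Everything in it (population size, tests per person, coincidence rate) is an assumption invented for illustration, not anything from Jason’s account: seed a pool of hypotheses with random validity, let each generation of holders keep only those their own flimsy experience happens to “confirm,” and watch the average validity of the survivors climb.

```python
import random

random.seed(0)

# Assumed parameters, chosen only to illustrate the filtering effect.
N_HYPOTHESES = 10_000
TESTS_PER_PERSON = 3   # flimsy grounds: a handful of observations each
COINCIDENCE = 0.10     # chance a worthless hypothesis "confirms" anyway
GENERATIONS = 6

# Each hypothesis gets a hidden "validity": how often reality matches it.
pool = [random.random() for _ in range(N_HYPOTHESES)]

def confirms(validity):
    """One person's experience 'confirms' if any single test appears to
    match (confirmation bias: one correspondence is enough)."""
    p = validity + COINCIDENCE * (1 - validity)  # real signal plus coincidence
    return any(random.random() < p for _ in range(TESTS_PER_PERSON))

for gen in range(GENERATIONS):
    # survivors get restated and re-tested; the rest are forgotten
    pool = [v for v in pool if confirms(v)]
    print(f"gen {gen + 1}: {len(pool):>5} surviving, "
          f"mean validity {sum(pool) / len(pool):.2f}")
```

The filter is hopelessly crude at the level of any one person, exactly as described above, but iterated across a population it steadily enriches the pool in hypotheses with real predictive power.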

Jason gave us a hypothetical example. Suppose there are two kinds of people, green people and blue people. Green people are 95% of the population and tell the truth 99% of the time. Blue people are 5% of the population and lie 5% of the time. How are people to discover that blue people are less trustworthy (five times less trustworthy)? Well, start out at random with the hypotheses “green people lie” and “blue people lie,” assigned by coin flip if necessary. The “green people lie” hypothesis will be confirmed very rarely, relative to how often it is tested, and spread very slowly. The “blue people lie” hypothesis will be confirmed more often and spread more rapidly, and moreover, this effect will snowball and compound, despite the fact that blue people still tell the truth most of the time, and most green people interact with blue people very rarely (they’re only 5% of the population).
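A rough simulation of this dynamic follows. The 95/5 split and the lie rates come from the example; the catch rate, the forgetting threshold, and the retell count are assumptions I’ve made up to complete the sketch. Each hypothesis is treated as a branching process: a holder either catches a member of the named group lying (and restates the hypothesis to a few others) or loses interest and forgets it.

```python
import random

random.seed(0)

LIE_RATE = {"green": 0.01, "blue": 0.05}  # from the example
CATCH_RATE = 0.5   # assumed: chance a lie is actually caught
PATIENCE = 20      # assumed: encounters with the named group before forgetting
RETELL = 3         # assumed: people a "confirmed" holder repeats it to
GENERATIONS = 8
SEEDS = 100        # initial holders of each rival hypothesis

def confirmed(group):
    """Does one holder catch a member of `group` lying before losing
    interest? Confirmation bias: only encounters with the named group
    are noticed, and a single caught lie counts as confirmation."""
    p = LIE_RATE[group] * CATCH_RATE
    return any(random.random() < p for _ in range(PATIENCE))

def spread(group):
    holders = SEEDS
    for gen in range(GENERATIONS):
        # each holder either confirms (and retells) or quietly forgets
        holders = RETELL * sum(1 for _ in range(holders) if confirmed(group))
        print(f"  gen {gen + 1}: {holders} holders")

print("'green people lie':")
spread("green")
print("'blue people lie':")
spread("blue")
```

With these assumed numbers, a holder of “green people lie” confirms before forgetting about 10% of the time, for a branching factor of roughly 3 × 0.10 ≈ 0.3, so the idea dies within a few generations; “blue people lie” confirms about 40% of the time, for a branching factor of roughly 3 × 0.40 ≈ 1.2, so it compounds, even though blue people tell the truth 95% of the time.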

But there is a catch. What if the blue people lie much more than 5% of the time? It could be that most of them lie much of the time, but that they tell very subtle lies like “there is no difference in the rate at which blue people and green people lie.” How would you catch them in such a lie? Who’s keeping statistics on such things? That, incidentally, is a lie that would be “confirmed” the vast majority of the time, since (according to our stipulations) it is usually impossible to catch either the blue people or the green people in a lie.

If they repeat that lie enough, that there is no difference in the rate of lying, they can get it accepted as a consensus, by the mechanisms outlined earlier, and then proceed to invoke altruistic punishment and social sanction against anyone who questions it… (“That’s preposterous! You should be ashamed to say such a thing! You’re a bad person for even thinking such a thing!”)

Extra credit. Model this scenario and determine what kind of gap or delta can be created between the consensus “there is no difference in the rate of lying” and the reality of measurable differences in verifiable and actionable fraud and deception, and what it costs, in terms of repetition, to maintain.
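As one possible starting point, here is a back-of-the-envelope sketch. None of its dynamics come from the post: I assume believers of the consensus are created by hearing it repeated (each repetition self-“confirms,” per the stipulation that lies are almost never caught) and destroyed only by direct disconfirming experience, which requires personally catching several blue lies.

```python
# Assumed dynamics: d(believers)/dt = r*(1 - b) - d*b, where r is the
# repetition rate and d the rate of direct disconfirming experience.

def equilibrium(repetition_rate, disconfirm_rate):
    """Steady-state share of the population believing the consensus."""
    return repetition_rate / (repetition_rate + disconfirm_rate)

# Disconfirming the *rate* claim takes more than one caught lie; assume
# it takes about three, using the example's numbers for everything else.
blue_meet, blue_lie, catch = 0.05, 0.05, 0.5     # per-encounter figures
disconfirm = (blue_meet * blue_lie * catch) / 3  # ~0.0004 per encounter

for reps in (0.001, 0.01, 0.1, 1.0):  # repetitions heard per encounter
    print(f"repetition rate {reps:>5}: consensus share ≈ "
          f"{equilibrium(reps, disconfirm):.1%}")
```

Under these assumptions the gap is cheap to maintain: individuals almost never accumulate enough direct evidence to notice the rate difference, so even hearing the claim repeated once per hundred encounters keeps the consensus above 95%.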
