Confirmation Bias as a Strategy for Collective Cognition

Guest post by Jason Cockrell, reprinted with permission.

Preface

Confirmation bias is the tendency for a person to accept a given proposition as fact based on limited or ambiguous evidence if he is primed ahead of time to lean toward that viewpoint, when the same evidence would not be convincing without priming. Confirmation bias is considered a fallacious form of thinking because the objective merit of the evidence is the same regardless of the thinker’s initial unsubstantiated inclinations, but his interpretation of that evidence is biased toward confirming his preconceptions.

Susceptibility to confirmation bias is thus by definition irrational in the scientific sense – it does not give the individual thinker the greatest probability of arriving at a true model of the world. However, Eli Harman writes that confirmation bias can be rational in the utility-maximizing sense if the cost of false negatives (failing to hold that a certain proposition is true when it is) is much greater than the cost of false positives (holding a proposition as true when it is not). As an example, he proposes the scenario of trusting or mistrusting a member of an ostensibly disreputable minority group. Mistrusting in error incurs only a modest, fixed cost (the loss of one particular social interaction that could have occurred but does not), whereas trusting in error incurs unknown and potentially disastrous costs. Thus it can be economically optimal to apply a cognitive process that is not scientifically optimal.

However, Eli’s analysis does not clearly address the converse scenario, in which a false positive is more costly than a false negative. In that case, confirmation bias would be actively risky relative to a more agnostic approach, since the tendency to leap to conclusions exposes one to the outsize risk of false positives. One example is the need for omnivorous creatures to explore a wide variety of food sources in times of famine. Many foods that humans regularly consume are toxic in their raw form, and had our ancestors been closed-minded, they might never have discovered the preparation techniques required to make these foods palatable. It is conceivable that real-world challenges correspond to Eli’s proposed scenario more frequently than to mine, but this is not obvious, and it has not been demonstrated.

I propose a more comprehensive theory for the origin of cognitive bias, and one with broader implications for epistemology. To preserve generality, I make no assumptions about the relative costs of a false positive versus a false negative, but only that both outcomes are inferior to a *true* positive or negative. In other words, I allow for all utility functions that uniformly prefer truth over falsehood.

Lying Blues

Imagine a society consisting of 95% green people and 5% blue people. Suppose that a certain behavior is five times more common among blue people than among green people, but is nonetheless rare in all people. This could be a positive or negative behavior, but either way, most interactions won’t involve it. To use some specifics, call this behavior ‘lying’, and suppose green people lie 1% of the time they speak while blue people lie 5% of the time. Note that although blue people lie five times more frequently, green people still tell nearly four times as many lies overall (0.95% of all statements are green lies versus 0.25% blue lies), owing to their far greater share of the population.
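As a quick sanity check, the arithmetic can be worked out in a few lines of Python (the rates and population shares are the illustrative values above, not real data):

    # Back-of-the-envelope check using the illustrative numbers above.
    green_share, blue_share = 0.95, 0.05        # population shares
    green_lie_rate, blue_lie_rate = 0.01, 0.05  # chance a given statement is a lie

    green_lies = green_share * green_lie_rate   # 0.0095 -> 0.95% of all statements
    blue_lies = blue_share * blue_lie_rate      # 0.0025 -> 0.25% of all statements

    print(f"Green lies as a share of all statements: {green_lies:.4f}")
    print(f"Blue lies as a share of all statements:  {blue_lies:.4f}")
    print(f"Greens tell {green_lies / blue_lies:.1f}x as many lies overall")  # ~3.8x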

Now suppose an individual scientific truth-seeker seeks to detect a propensity for lying in a demographic group. This need not be a conscious process, just a logical consequence of the tendency of all people to notice patterns. Reasonably, he might induce that a demographic has a propensity for lying if members of that group lie to him three times in a row. Three lies would constitute a pattern, which he would notice and generalize into a stereotype.

However, this scenario almost never occurs. When the truth-seeker interacts with green people, they tell three lies in a row only 1% * 1% * 1% = 0.0001% of the time. When he interacts with blue people, they tell three lies in a row only 5% * 5% * 5% = 0.0125% of the time. Although the latter scenario occurs 125 times more frequently than the former, it is still so rare as to nearly never occur, especially given that interactions with blue people are already rare. In all probability, an individual looking for a three-lie pattern to detect a demographic propensity for lying will live his whole life never realizing that blue people are more prone to lying than green people.
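For concreteness, the same calculation in Python, along with the ratio between the two groups (again using the illustrative rates above):

    # Probability of hearing three lies in a row from members of one group.
    green_lie_rate, blue_lie_rate = 0.01, 0.05

    p_three_green = green_lie_rate ** 3  # 1e-06, i.e. 0.0001%
    p_three_blue = blue_lie_rate ** 3    # 1.25e-04, i.e. 0.0125%

    print(f"Three green lies in a row: {p_three_green:.7f}")
    print(f"Three blue lies in a row:  {p_three_blue:.7f}")
    print(f"Ratio: {p_three_blue / p_three_green:.0f}x")  # 125x, but both are rare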

Confirmation Bias

Consider the same society proposed above, but instead of applying a rational inductive process, let the truth-seeker use confirmation bias. With this bias, he will consider a demographic to be particularly prone to lying if he is first primed with the stereotype verbally, then encounters a lie from that group *once* after the priming. If he is primed with the suggestion of a stereotype but does not encounter a lie in his next interaction, he forgets the priming and returns to an agnostic cognitive state. However, if he does encounter a lie after the priming, then the stereotype is ‘confirmed’ (despite the lack of a true pattern). Once confirmed, he will spread the stereotype by verbal suggestion to others.

To stimulate the confirmation process, imagine seeding initial biases in the society through random jitter: pick people totally at random and whisper a stereotype in their ear. For fairness, let the stereotype alternately be “Blue people tend to lie more than others” or “Green people tend to lie more than others.” The randomness can be achieved with a literal coin flip. The stereotype has no correspondence to reality, and none is intended; it is merely a random seed – some people are told that blue people lie more, and others are told that green people lie more.

If the next interaction they have with the member of the specified demographic is consistent with the seeded stereotype, it is confirmed. If not, it is forgotten. After one iteration of this process, 99% of the people told that greens lie more often will have forgotten it, but only 95% of those told that blues lie more often will. In other words, within one iteration, 5 times more people will hold the ‘confirmed’ stereotype that blues are liars than will hold the ‘confirmed’ stereotype that greens are liars, even though the initial seed was an even 50-50.
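One iteration of this seeding-and-confirmation process is easy to simulate. The sketch below is only illustrative: the number of seeded people is an arbitrary assumption, and the lie rates are the ones used throughout the thought experiment.

    import random

    # One iteration of random seeding followed by 'confirmation' or forgetting.
    GREEN_LIE_RATE, BLUE_LIE_RATE = 0.01, 0.05
    SEEDED_PEOPLE = 1_000_000  # hypothetical number of people given a random seed

    random.seed(0)
    confirmed = {"greens lie": 0, "blues lie": 0}

    for _ in range(SEEDED_PEOPLE):
        # Coin-flip seed: half are told greens lie more, half that blues lie more.
        seed = "greens lie" if random.random() < 0.5 else "blues lie"
        lie_rate = GREEN_LIE_RATE if seed == "greens lie" else BLUE_LIE_RATE
        # The next interaction with a member of the named group: was it a lie?
        if random.random() < lie_rate:
            confirmed[seed] += 1  # stereotype 'confirmed'; otherwise it is forgotten

    print(confirmed)  # roughly five 'blues lie' confirmations per 'greens lie' one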

Those who have the stereotype ‘confirmed’ spread it by verbal suggestion to others, while those who do not, forget it. Through gossip, the stereotype that is ‘confirmed’ more frequently is also suggested more frequently. Thus going into the next round, five times more people will have been verbally primed to believe that blues are liars than that greens are liars.

It follows straightforwardly that the stereotype that blues are liars will spread over time at a much faster pace than the competing stereotype that greens are liars. After several iterations, the blue stereotype will crowd out the green stereotype and reach a point of social consensus. Thus it will be general ‘knowledge’ among most people in society that blues are not to be trusted while greens are fine. This truth will be ‘known’ despite no particular person ever encountering the tell-tale three-lie pattern that would inductively lead one to conclude blues are liars.

A fun exercise for the reader is to model this scenario programmatically, using variables for the rate of lying in each demographic (prevalence), the number of people to whom each individual will communicate a stereotype once it has been ‘confirmed’ for him (growth rate), and potentially other variables for forgetfulness. There is likely a social ‘carrying capacity’ for stereotypes which emerges from these factors, but that is out of scope.
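One possible sketch of that exercise follows. Everything here is an assumption made for illustration: the growth rate, forgetfulness rate, starting seed, and the crude population cap standing in for a carrying capacity are arbitrary choices, not part of the author’s model.

    import random

    # A rough multi-iteration model of stereotype spread via gossip.
    LIE_RATE = {"green": 0.01, "blue": 0.05}  # prevalence of lying per group
    GROWTH_RATE = 10    # people each 'confirmed' believer passes the stereotype to
    FORGET_RATE = 0.2   # chance per iteration that a believer drops the stereotype
    POPULATION = 100_000
    ITERATIONS = 20

    random.seed(0)
    # believers[g] = people currently holding the 'confirmed' stereotype about group g
    believers = {"green": 0, "blue": 0}
    # Small random seed, split evenly between the two competing stereotypes.
    primed = {"green": 500, "blue": 500}

    for _ in range(ITERATIONS):
        new_primed = {}
        for group in ("green", "blue"):
            # Each primed person tests the suggested stereotype against one interaction.
            confirmations = sum(random.random() < LIE_RATE[group]
                                for _ in range(primed[group]))
            believers[group] += confirmations
            # Some believers forget; the rest gossip to GROWTH_RATE new people.
            believers[group] = int(believers[group] * (1 - FORGET_RATE))
            new_primed[group] = min(POPULATION, believers[group] * GROWTH_RATE)
        primed = new_primed

    print(believers)  # the 'blue' stereotype dominates; the 'green' one dies out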

Frequency Over Rigor

Lest the significance of the above go unnoticed, it should be explicitly mentioned that the true stereotype – blues tend to lie more often – manages to become widely known among the populace despite (1) the initial ‘seed’ stereotypes being totally random and (2) no individual person ever experiencing enough direct evidence to induce the stereotype themselves.

Absent the priming by verbal suggestion, no one would ever conclude from any single instance of lying that any particular demographic is more prone to lying than any other. To be clear, a single instance is by definition never a pattern. No matter how politically incorrect or how open to stereotyping one allows oneself to be, there is no way to induce from the single act of being told a lie by a blue person that, therefore, blue people generally tend to lie. Indeed, perhaps it is tall people – if the person is tall – or redheaded people – if the person is redheaded – who are prone to lying. A pattern simply cannot be extrapolated from a single instance, all political correctness aside.

It is only by introducing a priming, the initial seed of a suggestion that blues tend to be liars, that confirmation bias can cause a person to conclude this to be true based on a single experience, a single lie. Yet other individuals were primed with the stereotype that greens tend to be liars, and when they encountered a lying green, they felt that the stereotype was confirmed, as well. The rigor of the evidence is the same in both cases – a random suggestion of a stereotype followed by a single instance which confirms it. In neither case is the evidence scientifically rigorous, and in both cases, the ‘confirmation’ occurs by statistical chance.

Thus the true stereotype becomes more prevalent due to the greater *frequency* of its confirmation. Because more blues actually do lie, more people seeded with the ‘lying blue’ stereotype feel that it is confirmed. However, each individual who felt a confirmed stereotype – in either direction – encountered the same level of evidence – a single event of ‘confirmation’. From the standpoint of individual cognition, those who believe blues lie more often and those who believe greens lie more often feel the same level of certainty, and have the same level of justification for their (conflicting) beliefs.

Collective Cognition

It follows from the above that confirmation bias plays a critical role in collective cognition. This is a stochastic truth-seeking process in which the error of some individuals holding incorrect views is acceptable from the perspective of the broader society which tends, on average, to gravitate toward accurate views. Each person is a laboratory for testing the tendency of a hypothesis to hold up against reality. False hypotheses will seem to be confirmed for some people, but not most people. True hypotheses will fare far better, and thus will get repeated. Over several iterations of gossip followed by experience, the narratives that more accurately describe the world will displace those which do so less accurately.

That this benefit of confirmation bias has not been generally noticed in discourse is a consequence of the Western rationalist and scientific concept of epistemology. Under rationalism, the purpose of thought itself is for the individual to assess the evidence and deduce accurate conclusions. Any biases which interfere with *his* discovering the truth are to be discarded. Confirmation bias is clearly an example of such, alongside naturalistic and nativistic biases, anchoring, the gambler’s fallacy, etc.

It is profoundly contrary to science and rationalism to suppose that the purpose of thought might not be individually rational – in humans, anyway. This concept is quite familiar in ants, though, where scientists have long understood that knowledge is distributed throughout the colony. Each individual ant has little understanding of the macro structure of the colony and almost no sense of its purpose or the greater direction of its missions, yet the effective behaviors of construction, scavenging, hoarding, defense, and invasion that characterize successful colonies nonetheless manifest out of the collectively-held knowledge of the many ants.

Thus it may be beneficial for the human species, and probably for tribes and nations, that humans often give in to the instinct of confirmation bias and allow themselves to be persuaded by shaky evidence if such evidence is consistent with views that have been expressed by others, especially others from their own tribe. In doing so, the individual sacrifices some degree of his own reasoning, but he gains access to a distributed network that includes the aggregated and distilled experiences of many others. For the Western man, this constitutes a bit of a leap of faith, but evolution has apparently rewarded those who take the leap more than those who do not.
