
What Have We Learned About Implicit Bias?

Implicit bias has been much in the news. A new paper explores it in detail.

"Day 123 - Discrimination" by Laszlo Gyarmati is licensed under CC BY-NC-SA 2.0
Source: "Day 123 - Discrimination" by Laszlo Gyarmati is licensed under CC BY-NC-SA 2.0

People are biased in the way that they treat the people and objects in the world around them. You might be predisposed to like other people who attended the same college that you did, or to gravitate toward products that you have seen in advertisements. Biased behavior has particular social significance when it leads to systematic negative treatment of people based on factors like gender, race, ethnicity, or sexual orientation.

As a result, psychologists have spent time trying to understand the basis of biased behavior. An intriguing line of work that has garnered a lot of attention from both researchers and the popular press involves a construct called implicit bias, which can be contrasted with explicit bias. In the context of bias toward social groups, explicit bias involves being aware that one has a negative attitude toward a particular group.

The conceptualization of implicit bias assumes that people may also regard groups negatively without any awareness that they are doing so.

This research was spurred by the development of measures like the Implicit Association Test (IAT), which asks people to make speeded categorization responses to items representing two groups in order to assess whether one of those groups is more strongly associated with negative information than the other.

As a representative example of early findings in this literature, participants asked to classify faces of Black and White people were faster to do so when the response key for indicating that a face was Black was also used to respond to negative items (rather than positive items). This pattern suggests that these participants have a bias against Black people. However, when the same participants also rated how much they liked White and Black people, those explicit ratings did not show this pattern of bias. Furthermore, when participants were informed that their performance on the implicit measure revealed bias, they were surprised by that outcome.
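To make the logic of that finding concrete, here is a rough sketch, in Python, of how a compatibility effect like the one just described could be computed from response times: the difference in average speed between the two key assignments, scaled by how variable responses are overall. The numbers and the simplified scoring function are purely illustrative; the actual IAT scoring algorithm involves additional steps (error penalties, trimming of extreme latencies) that are omitted here.

    # Rough sketch of how an IAT-style compatibility effect might be scored,
    # assuming response latencies (in milliseconds) from two blocks:
    #   compatible   -> Black faces share a response key with negative words
    #   incompatible -> Black faces share a response key with positive words
    from statistics import mean, stdev

    def compatibility_score(compatible_rts, incompatible_rts):
        """Difference in mean latency scaled by overall variability.
        Positive values mean faster responses in the compatible block,
        i.e., a stronger association between Black faces and negative words."""
        difference = mean(incompatible_rts) - mean(compatible_rts)
        pooled_sd = stdev(compatible_rts + incompatible_rts)
        return difference / pooled_sd

    # Hypothetical latencies, for illustration only
    compatible = [620, 580, 640, 600, 610]
    incompatible = [710, 690, 730, 700, 720]
    print(round(compatibility_score(compatible, incompatible), 2))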

There has been a lot of controversy within the psychology literature about what tests like the IAT actually measure and whether there really is such a thing as implicit bias. An excellent review paper in the July 2019 issue of Perspectives on Psychological Science by my University of Texas colleague Bertram Gawronski explores this issue. He draws six lessons from a careful analysis of the research on implicit attitudes over the past two decades. I will discuss a few of those lessons here.

The first important lesson is that there is no good evidence that people are unaware of their biases. While people do often express surprise at the score they get on the Implicit Association Test, that surprise often reflects the labels used to describe the degree of bias rather than the presence of some difference in attitudes toward different groups. Furthermore, Gawronski points out that participants who are asked to predict what score they will get on an IAT after the test is described to them do a pretty good job of predicting their scores.

Another important lesson is that the low correlation between measures like the IAT (thought to test implicit processes) and explicit ratings often reflects the fact that these measures use very different materials. For example, people might take an IAT using faces of Black and White people, but the ratings might focus on the endorsement of social policies aimed at Black and White people. When the differences in materials are controlled more carefully, the differences between the measures get smaller.

Gawronski also points out that measures like the IAT are less stable over time than people's ratings. That is, if you give someone an IAT several times, their scores will vary more across sessions than their scores on a rating task would. This finding suggests that tests like the IAT are sensitive to someone's recent experience and casts doubt on the idea that there is a stable underlying attitude that tests like this are measuring.

Several companies have recently employed training programs to try to reduce implicit bias in their employees. These efforts are well-intentioned, but the research suggests that the long-term effects of these programs are limited. Gawronski points to an analysis of 17 interventions: Nine of them reduced the amount of bias people displayed on an IAT immediately after the intervention, but none had significant long-term effects on the IAT scores people received.

There are two potential reasons why the long-term effects of these programs are small (at best). One is that people’s scores on the IAT may also reflect other information they encounter after taking the test for the first time.

The second is that interventions aimed at reducing bias may have effects that are specific to the context in which they are presented. If you take an online course or a classroom course aimed at reducing bias, the influence of that training may not extend to the workplace. Instead, the associations measured by tests like the IAT may be highly specific, and their influence on behavior might also be specific.

So, where does this leave us?

Given the current state of the evidence, there is little reason to think that people harbor strong biases they are unaware of. They may not choose to admit those biases to other people, but they are typically aware, it seems, that those biases exist.

Measures like the IAT may still be useful in predicting behavior in situations that share elements of what people are doing when taking the IAT. If people have to respond quickly, then performance on the IAT is likely to be a better predictor of what a person is going to do than if they have time to deliberate. For example, a lot of things other people do are ambiguous: Was that person you just talked to confident or arrogant? A snap judgment about a behavior might hinge on your overall attitude about an individual.

Finally, from a practical perspective, there is a lot more work that needs to be done to understand what kinds of training will have long-term influences on bias in the workplace. Rather than focusing on changing performance on tests like the IAT, training programs could focus on interventions that lead to changes in particular behaviors. That is easier said than done, of course, but just reducing measured bias will not necessarily lead to behavior change.

References

Gawronski, B. (2019). Six lessons for a cogent science of implicit bias and its criticism. Perspectives on Psychological Science, 14(4), 574-595.
