Algorithms help people see and correct their biases, studies show

Algorithms are a staple of modern life. People rely on algorithmic recommendations to sift through deep catalogs and find the best movies, channels, information, products, people and investments. Because people train algorithms with their decisions – for example, algorithms that make recommendations on e-commerce sites and social media – algorithms learn and encode human biases.
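To make that mechanism concrete, here is a minimal sketch in Python using simulated data and scikit-learn. The feature names, numbers and penalty size are illustrative assumptions, not data from any real platform or from our study; the point is simply that a model fit to biased human ratings picks the bias up as a learned weight.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 1_000

    # A legitimate feature that should drive decisions, such as a star rating.
    star_rating = rng.uniform(3.0, 5.0, n)
    # An irrelevant attribute, such as whether a name signals a marginalized group.
    group_signal = rng.integers(0, 2, n)

    # Simulated human ratings: mostly driven by quality, plus a small
    # penalty tied to the irrelevant attribute -- the human bias.
    human_rating = 2.0 * star_rating - 0.5 * group_signal + rng.normal(0, 0.3, n)

    # "Training an algorithm on your decisions": fit a model to those ratings.
    X = np.column_stack([star_rating, group_signal])
    model = LinearRegression().fit(X, human_rating)

    print("learned weight on quality:", round(model.coef_[0], 2))       # about 2.0
    print("learned weight on group signal:", round(model.coef_[1], 2))  # about -0.5, the encoded bias

The model never decides to discriminate; it simply reproduces the pattern baked into its training data.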

Algorithmic recommendations are biased toward popular choices and toward content that provokes anger, such as partisan news. On a societal level, algorithmic biases perpetuate and exacerbate structural racial bias in the judicial system, gender bias in corporate hiring, and wealth inequality in urban development.

Algorithmic bias can also be used to reduce human bias. Algorithms can reveal hidden structural biases in organizations. In a paper published in the Proceedings of the National Academy of Sciences, my colleagues and I found that algorithmic bias can help people better identify and correct biases in themselves.

The bias in the mirror

In nine experiments, Begum Celikitutan, Romain Cadario and I had research participants rate Uber drivers on their driving skill and reliability, or rate Airbnb listings on how likely they would be to rent them. We gave participants relevant information, such as the number of trips a driver had completed, a description of the property, or a star rating. We also included an irrelevant, potentially biasing piece of information: a photo that revealed a driver’s age, gender and attractiveness, or a name that suggested a listing’s host was white or Black.

After participants made their ratings, we showed them one of two rating summaries: one displaying their own ratings, or one displaying the ratings of an algorithm trained on their ratings. We told participants about the biasing factor that might have influenced those ratings; for example, that Airbnb guests are less likely to rent from hosts with African American names. We then asked them to estimate how much influence the bias had on the ratings in the summary.
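As a rough illustration of what estimating the influence of a bias can look like, here is a hypothetical Python sketch that adjusts ratings for a defensible feature and then measures the gap that remains across the irrelevant attribute. The variable names and effect sizes are assumptions made for illustration; this is not the procedure or data from our experiments.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    star_rating = rng.uniform(3.0, 5.0, n)      # defensible feature
    group_signal = rng.integers(0, 2, n)        # irrelevant attribute
    ratings = 2.0 * star_rating - 0.4 * group_signal + rng.normal(0, 0.3, n)

    # Remove the part of the ratings explained by the legitimate feature...
    slope, intercept = np.polyfit(star_rating, ratings, 1)
    residuals = ratings - (slope * star_rating + intercept)

    # ...then measure the gap that remains across the irrelevant attribute.
    bias_gap = residuals[group_signal == 1].mean() - residuals[group_signal == 0].mean()
    print(f"rating gap tied to the irrelevant attribute: {bias_gap:.2f}")  # roughly -0.4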

Whether participants assessed the influence of race, age, gender or attractiveness bias, they saw more bias in ratings made by algorithms than in their own ratings. This algorithmic mirror effect held whether participants judged the ratings of real algorithms trained on their decisions or whether we showed participants their own ratings and deceptively told them that an algorithm had made them.

Participants saw more bias in the algorithms’ decisions than in their own, even when we gave participants a monetary bonus if their bias judgments matched the judgments made by another participant who saw the same decisions. The algorithmic mirror effect existed even if participants were in a marginalized category – for example, by identifying as a woman or as Black.

Research participants were just as able to spot biases in algorithms trained on their own decisions as they were to spot biases in the decisions of other people. Participants were also more likely to see the influence of racial bias on the algorithms’ decisions than on their own, but they were equally likely to see the influence of defensible features, such as star ratings, on the algorithms’ decisions and on their own.

The bias blind spot

People see more of their biases in algorithms because algorithms remove people’s bias blind spots. It is easier to see biases in other people’s decisions than in your own because you use different evidence to evaluate them.

When examining your own decisions for bias, you search for evidence of conscious bias – whether you considered race, gender, age, status or other unwarranted factors when making the decision. You overlook and excuse bias in your decisions because you lack access to the associative machinery that drives your intuitive judgments, which is where bias often operates. You might think, “I didn’t think about their race or gender when I hired them. I hired them on merit alone.”

When examining other people’s decisions for bias, you don’t have access to the processes they used to make the decisions. So you scrutinize their decisions for bias, where bias is obvious and harder to excuse. You might see, for example, that they only hired white men.

Algorithms remove the bias blind spot because you see algorithms more like you see other people than like you see yourself. Algorithms’ decision-making processes are a black box, similar to how other people’s thoughts are inaccessible to you.

Participants in our study who were more likely to exhibit the bias blind spot were more likely to see bias in the algorithms’ decisions than in their own.

People also externalize bias in algorithms. Seeing bias in algorithms is less threatening than seeing bias in yourself, even when the algorithms are trained on your preferences. People blame the algorithms: although algorithms are trained on human decisions, people call the bias they display “algorithmic bias.”

Corrective lens

Our experiments show that people are also more likely to correct their biases when those biases are reflected in algorithms. In a final experiment, we gave participants the chance to correct the ratings they had judged for bias. We showed each participant their own ratings, which we attributed either to the participant or to an algorithm trained on their decisions.

Participants were more likely to correct the ratings when they were attributed to an algorithm, because they believed those ratings were more biased. As a result, the final corrected ratings were less biased when they were attributed to an algorithm.

Algorithmic biases with harmful effects have been well documented. Our results show that algorithmic bias can be leveraged for good. The first step in correcting bias is to recognize its influence and direction. As mirrors that reflect our biases, algorithms can improve our decision-making.

This article is republished from The Conversation, a non-profit, independent news organization that brings you reliable facts and analysis to help you make sense of our complex world. It was written by: Carey K. Morewedge, Boston University

Carey K. Morewedge does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
