Artificial intelligence reflects prejudices and social inequalities: this is how we can improve it

Facial recognition applications are increasingly widespread, ranging from unlocking mobile phones to video surveillance and criminal investigation. But to what extent is this technology reliable? What hidden problems can we find in its use? And, above all, what can we do to minimize or even neutralize the impact of these problems?

Facial recognition once seemed exclusive to humans and impossible for machines, but today it is essentially a solved problem thanks to the recent evolution of deep neural networks and, more broadly, artificial intelligence. With them, the classic pipeline of detection, feature extraction and classification has become obsolete.

It is no longer necessary to tune dozens of parameters and combine different algorithms. Networks now simply learn from the data we feed them, provided that data is conveniently labeled.

Many successes, but also errors

The results are spectacular in terms of the correct recognition rates achieved with neural networks. The phone always recognizes you, biometric access to your company never fails, surveillance cameras always end up detecting the suspect.

Or maybe this is not always the case?

Robert Julian-Borchak Williams would surely disagree. This African-American citizen has the dubious honor of being the first to have been arrested because a facial recognition algorithm incorrectly identified him.

Figure: Correct recognition rates of different facial recognition models, as reported by the algorithms' authors on the Labeled Faces in the Wild (LFW) and YouTube Faces in the Wild (YTF) databases.

That this first mistake involved an African-American man does not seem to be a coincidence. Although Julian-Borchak Williams's case occurred in 2020, as early as 2018 the researcher Joy Buolamwini had published a study showing that facial recognition systems struggled to identify dark-skinned women. Her work reached the general public through the documentary Coded Bias.

Because yes, women too have suffered discrimination from artificial intelligence. The best-known example is Amazon's recruiting algorithm, which penalized résumés containing the word "woman". Fortunately, Amazon stopped using it once this sexist tendency was verified.

The bias keeps showing up

To test whether it is possible to find biases in facial recognition models, in a recent experiment three groups of students were asked to (independently) analyze the performance of different models. The models examined were those used in the DeepFace library.

The evaluation aimed to choose the best model in terms of recognition rate. The results obtained roughly coincided with those reported by the models' authors. The few failures detected usually involved women and dark-skinned people. Notably, on some models, face detection itself failed for people of color: the problem was not a misidentification, but that the model did not even detect a face.
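A simple way to surface this kind of disparity is to break an evaluation down by demographic group instead of reporting a single global accuracy. The sketch below uses invented labels and outcomes (not the students' actual data) to show the idea:

```python
# Sketch: per-group accuracy instead of one global figure.
# The records below are invented for illustration, not real results.
records = [
    # (group, correctly_recognized)
    ("light-skinned man", True), ("light-skinned man", True),
    ("light-skinned man", True), ("light-skinned woman", True),
    ("light-skinned woman", True), ("dark-skinned man", True),
    ("dark-skinned man", False), ("dark-skinned woman", True),
    ("dark-skinned woman", False), ("dark-skinned woman", False),
]

def accuracy_by_group(records):
    """Return {group: fraction of correct recognitions}."""
    totals, hits = {}, {}
    for group, correct in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(correct)
    return {g: hits[g] / totals[g] for g in totals}

for group, acc in sorted(accuracy_by_group(records).items()):
    print(f"{group}: {acc:.0%}")
```

A model can look excellent on the aggregate number while one of these per-group rates collapses, which is exactly what the global recognition percentages hide.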

These models can also be used as estimators of gender, age, and ethnicity. For this experiment, the VGG-Face model was used. Gender estimation worked quite well for European people, but less well for Asian or African American people; the most common error was classifying women from these groups as men. The other estimators (age, ethnicity) did not work well, and the division by ethnicity proved quite artificial and prone to multiple errors.
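The "women classified as men" error can be quantified the same way: for each group, count how often the estimated gender differs from the labeled one. The data below is invented to illustrate the breakdown, not taken from the experiment:

```python
# Sketch: gender-estimation error rate per group.
# Sample data is invented, purely to illustrate the breakdown.
samples = [
    # (group, true_gender, predicted_gender)
    ("European woman", "woman", "woman"),
    ("European woman", "woman", "woman"),
    ("Asian woman", "woman", "man"),
    ("Asian woman", "woman", "woman"),
    ("African American woman", "woman", "man"),
    ("African American woman", "woman", "man"),
]

def error_rate_by_group(samples):
    """Fraction of samples per group where prediction != label."""
    totals, errors = {}, {}
    for group, truth, pred in samples:
        totals[group] = totals.get(group, 0) + 1
        errors[group] = errors.get(group, 0) + int(pred != truth)
    return {g: errors[g] / totals[g] for g in totals}

print(error_rate_by_group(samples))
```

The same tabulation works for any attribute estimator; what matters is never averaging the error over all groups at once.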

Image caption: How can the model confuse these women with men?

These results should not lead us to conclude that this technology is useless. In fact, its recognition ability surpasses human performance in many cases, and at a speed unattainable by any human being.

What can we do?

From our point of view, it is important to look at the possibilities that artificial intelligence has as a tool and not underestimate its use when we detect problems like the ones shown here. The good news is that, once problems are detected, initiatives arise and studies are carried out to improve its use.

Biases appear in models for multiple reasons: a poor choice of data, poor labeling of that data, human intervention in the process of creating and choosing models, and misinterpretation of the results.

Artificial intelligence, often regarded as a technological advance free of prejudice, turns out to be a faithful reflection of our own biases and of the inequalities of the society in which it is developed. As this interesting article concludes, "it can be an opportunity to rebuild ourselves and not only achieve algorithms without bias, but also a more just and fraternal society."

We have the technical tools to achieve it. Developers can find ways to test and improve their models; the AI Fairness 360 initiative is one example. But perhaps the most sensible thing is to apply common sense and use artificial intelligence intelligently.
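Toolkits such as AI Fairness 360 implement fairness metrics that developers can run over their models' outputs. One of the simplest is the disparate impact ratio: the favorable-outcome rate of the unprivileged group divided by that of the privileged group. Here is a minimal, self-contained version of that metric (the example outcomes and the 0.8 warning threshold, taken from the common "four-fifths rule", are illustrative):

```python
def disparate_impact(outcomes_unprivileged, outcomes_privileged):
    """Ratio of favorable-outcome rates (1 = favorable, 0 = not).

    A value near 1.0 suggests parity; below 0.8 is a common
    warning threshold (the "four-fifths rule").
    """
    rate_u = sum(outcomes_unprivileged) / len(outcomes_unprivileged)
    rate_p = sum(outcomes_privileged) / len(outcomes_privileged)
    return rate_u / rate_p

# Invented example: 50% favorable rate vs 80% favorable rate.
ratio = disparate_impact([1, 0, 1, 0], [1, 1, 1, 1, 0])
print(round(ratio, 3))  # 0.625, below the 0.8 warning threshold
```

Running a handful of such metrics before deployment turns "the model seems fair" into a measurable, auditable claim.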

An example of the latter can be found in this study, which concludes that the most reliable way to recognize people is for humans and machines to collaborate. The Spanish National Police take the same approach with the ABIS facial recognition system: "It is always a person, and not the computer, who determines whether or not there is a similarity."
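That collaboration can be made explicit in software: the system only filters and ranks candidate matches, and the final identity decision is always deferred to a person. A minimal sketch of the pattern (the function name and the 0.9 threshold are invented for illustration):

```python
def triage_match(similarity, review_threshold=0.9):
    """Route a face-match similarity score.

    The machine never confirms an identity: weak scores are
    discarded, and strong ones are queued for human review,
    so a person always makes the final determination.
    """
    return "human_review" if similarity >= review_threshold else "discard"

print(triage_match(0.95))  # strong score: still goes to a person
print(triage_match(0.40))  # weak score: filtered out early
```

The design choice is that no code path outputs "match": the algorithm narrows the search, and the determination of similarity remains human.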
