Artificial intelligence reflects prejudices and social inequalities: this is how we can improve it

Facial recognition applications are increasingly widespread, ranging from unlocking mobile phones to video surveillance and criminal investigation. But how reliable is this technology? What hidden problems can we find in its use? And, above all, what can we do to minimize, or even neutralize, their impact?

Facial recognition once seemed exclusive to humans and impossible for machines, but today it is largely a solved problem, thanks to the recent evolution of deep neural networks and of artificial intelligence in general. Thanks to them, the classic paradigm of detection, feature extraction and classification has become obsolete.

It is no longer necessary to adjust dozens of parameters and combine different algorithms. Now networks simply learn from the data we feed them, provided that data is suitably labeled.
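
As an illustration of this shift, here is a minimal sketch in Python using the open-source DeepFace library (the same library used in the experiment described below); the image file names are placeholders:

from deepface import DeepFace

# One call replaces the old hand-tuned pipeline: the library detects
# the face, extracts an embedding with a pretrained network, and
# compares the two embeddings against a learned threshold.
result = DeepFace.verify(
    img1_path="person_a.jpg",   # placeholder image paths
    img2_path="person_b.jpg",
    model_name="VGG-Face",      # pretrained model; no manual tuning
)
print(result["verified"], result["distance"])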

Many successes, but also errors

The results are spectacular in terms of the recognition rates that neural networks achieve. Your phone always recognizes you, the biometric access system at your company never fails, surveillance cameras always end up finding the suspect.

Or is that not always the case?

Robert Julian-Borchak Williams would surely disagree. This African-American citizen has the dubious honor of being the first person known to have been arrested because a facial recognition algorithm misidentified him.

That this first mistake involved an African-American man does not seem to be a coincidence. Although the Julian-Borchak Williams case occurred in 2020, as early as 2018 the researcher Joy Buolamwini had published a study showing that facial recognition systems had difficulty identifying dark-skinned women. Her work reached the general public through the documentary Coded Bias.

Because yes, women have also suffered discrimination from artificial intelligence. The infamous Amazon algorithm that discriminated against résumés in which the word “woman” appeared is the clearest example. Fortunately, Amazon stopped using it once this sexist tendency was confirmed.

The bias keeps showing up

To test whether biases can be found in facial recognition models, a recent experiment asked three groups of students to independently analyze the performance of different models. The models examined were those available in the DeepFace library.

The evaluation aimed to select the best model in terms of recognition rate. The results obtained roughly matched those reported by the models’ authors. The few failures detected usually involved women and dark-skinned people. Notably, with some models face detection itself also failed on people of color: it was not that the model failed to recognize the person, it did not even detect a face.
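
As a rough sketch of how such a per-group evaluation can be scripted with DeepFace, consider the following; the image pairs and group labels are hypothetical placeholders, not the students’ actual protocol:

from deepface import DeepFace

# Hypothetical test set: pairs of images of the same person,
# annotated with a demographic group label.
pairs = [
    ("img/a1.jpg", "img/a2.jpg", "dark-skinned women"),
    ("img/b1.jpg", "img/b2.jpg", "light-skinned men"),
    # ... more labeled pairs
]

hits, totals, undetected = {}, {}, {}
for img1, img2, group in pairs:
    totals[group] = totals.get(group, 0) + 1
    try:
        res = DeepFace.verify(img1, img2, model_name="Facenet",
                              enforce_detection=True)
        if res["verified"]:
            hits[group] = hits.get(group, 0) + 1
    except ValueError:
        # With enforce_detection=True, DeepFace raises an error when it
        # finds no face at all: the failure mode described above.
        undetected[group] = undetected.get(group, 0) + 1

for group, n in totals.items():
    print(group, "accuracy:", hits.get(group, 0) / n,
          "faces not detected:", undetected.get(group, 0))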

These models can also be used as estimators of gender, age and ethnicity. For this experiment the model used was VGG-Face. Gender estimation worked quite well for European people, but less well for Asian or African-American people: the most common error was classifying women of those ethnicities as men. The remaining estimators (age, ethnicity) did not work well, and the division by ethnicity proved to be quite artificial and prone to error.
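
For reference, this is roughly how such attribute estimation is invoked through DeepFace (the image path is a placeholder; recent versions of the library return one dictionary per detected face):

from deepface import DeepFace

# Estimate age, gender and "race" for every face found in the image.
results = DeepFace.analyze(
    img_path="face.jpg",                # placeholder path
    actions=("age", "gender", "race"),
)
for face in results:
    print(face["age"], face["dominant_gender"], face["dominant_race"])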

These results should not lead us to believe that this technology is useless. In fact, its recognition ability is superior to that of humans in many cases and, of course, it works at a speed unattainable by any human being.

What can we do?

From our point of view, it is important to look at the possibilities artificial intelligence offers as a tool and not dismiss its use when we detect problems like the ones shown here. The good news is that, once problems are detected, initiatives arise and studies are carried out to improve its use.

Biases appear in models for multiple reasons: a poor choice of data, poor labeling of that data, human intervention in the process of creating and choosing models, and misinterpretation of the results.

Artificial intelligence, often regarded as a technological advance free of prejudice, turns out to be a faithful reflection of our own biases and of the inequalities of the society in which it is developed. As this interesting article concludes, “it can be an opportunity to rebuild ourselves and not only achieve algorithms without bias, but also a more just and fraternal society.”

We have the technical tools to achieve it. Developers can find ways to test and improve their models; the AI Fairness 360 initiative is one example. But perhaps the most sensible thing is to apply common sense and use artificial intelligence intelligently.
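
As an illustration, the AIF360 Python package behind AI Fairness 360 can quantify group disparities in a few lines; the tiny DataFrame below is invented purely for illustration (label = 1 means a favorable outcome, skin = 1 marks the privileged group):

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy outcomes for two groups (invented numbers, illustration only).
df = pd.DataFrame({
    "skin":  [1, 1, 1, 1, 0, 0, 0, 0],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["skin"])
metric = BinaryLabelDatasetMetric(dataset,
                                  privileged_groups=[{"skin": 1}],
                                  unprivileged_groups=[{"skin": 0}])

# Disparate impact is the ratio of favorable-outcome rates between the
# unprivileged and privileged groups; 1.0 would mean parity.
print("disparate impact:", metric.disparate_impact())
print("statistical parity difference:",
      metric.statistical_parity_difference())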

An example of such sensible use can be found in this study, which concludes that the best way to recognize people reliably is for humans and machines to collaborate. The approach of the Spanish National Police to its ABIS facial recognition system follows the same principle: “It is always a person, and not the computer, who determines whether or not there is a similarity.”

Hilario Gómez Moreno, University Professor, Signal Theory and Communications, University of Alcala; Georgiana Bogdanel, Computer Forensic Analyst at BDO Spain and PhD student in Forensic Sciences, University of Alcala; and Nadia Belghazi Mohamed, Research Staff and PhD student in Forensic Sciences, University of Alcala

This article was originally published on The Conversation. Read the original.
