The face recognition complication

Artificial Intelligence is often, inaccurately, talked about in terms of ending the world. Films routinely depict a rogue AI destroying the planet and threatening humanity's existence: a very physical manifestation of AI. The real problem, however, is far more personal and all the more insidious. Our faces are being collected and used to build increasingly powerful, yet error-ridden and human-bias-laden, AI models. A recent survey found that, on average, a Londoner is captured on 300 security cameras a day [1]. With this ominous beginning, it's time to talk about a specific danger posed by AI: discrimination by facial recognition systems, particularly against minority groups.

Minority groups have endured centuries of oppression at human hands, and it now seems that AI is adding to it. Amazon created a bot to sort résumés, run keyword searches over skills and job profiles, and then recommend candidates. The caveat? The AI showed a bias against women, because it had been trained on résumés from a male-dominated applicant pool [2]. Bias of this kind is an unintended outcome, and a real danger of relying too heavily on AI for such tasks.

Speaking of blatant discrimination: in 2017, AI researcher Michal Kosinski co-authored a paper in which an AI model guessed whether a person was straight or not just by looking at their facial features [3]. The political, ethical and moral implications of this are dangerous [4], [5]. In countries where homosexuality is still a criminal offence, an AI that outs people just by looking at their faces could set a terrible precedent for any hope of decriminalisation, and could strengthen the already existing institutional bias against LGBT people. What was even more surprising about the model is that it correctly identified gay men 81% of the time, and gay women 74% of the time, from a single facial image. With five facial images of a person, this rate increased to 91%. All of these rates are far higher than what human judges achieved.

Another major misfire of AI face recognition is discrimination against people of colour [6]: there are countless examples of systems recognising people of colour less accurately than white men. Detroit Police wrongfully arrested a man of colour on the basis of facial recognition software with an inherent bias [7]. Countless others have been subjected to police searches triggered by AI facial recognition. In another case, facial recognition software misidentified a college student of colour as a suspect in the Sri Lanka bombings; he received severe abuse when his name was leaked to the public, all over a faulty output.

Joy Buolamwini, founder of the Algorithmic Justice League, an organisation raising awareness about the social implications of AI, tested facial recognition models from leading companies such as Amazon and Microsoft [8]. All of the models performed poorly on minority groups and women, and some of the results were shocking. Figure 1 shows how two famous Black women, Oprah Winfrey and Michelle Obama, were labelled by leading facial recognition software as "appears to be male" and "young man wearing a black shirt". The reasons for this underwhelming performance are explained later in the article.

Figure 1: Discrimination by facial recognition software. a) Amazon Face Recognition; b) Microsoft Face Recognition.
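The audit technique behind such findings is, in principle, straightforward to reproduce: run the model on a benchmark annotated with demographic attributes and compare error rates per subgroup rather than reporting a single aggregate accuracy. The snippet below is a minimal sketch of that kind of disaggregated evaluation; the column names and the predict_gender wrapper are hypothetical placeholders, not the actual data or APIs used in the study.

```python
# Minimal sketch of a disaggregated accuracy audit (Gender Shades-style).
# Assumes a hypothetical predict_gender(image_path) model wrapper and a CSV
# with columns image_path, true_gender, skin_type -- illustrative names only.
import pandas as pd

def audit(benchmark_csv: str, predict_gender) -> pd.DataFrame:
    df = pd.read_csv(benchmark_csv)
    df["predicted"] = df["image_path"].apply(predict_gender)
    df["correct"] = df["predicted"] == df["true_gender"]
    # Accuracy and error rate broken down by intersectional subgroup,
    # e.g. darker-skinned women vs. lighter-skinned men.
    report = (
        df.groupby(["skin_type", "true_gender"])["correct"]
          .agg(accuracy="mean", n="size")
          .reset_index()
    )
    report["error_rate"] = 1.0 - report["accuracy"]
    return report.sort_values("error_rate", ascending=False)

# Usage (hypothetical):
# print(audit("face_benchmark.csv", predict_gender=my_model.predict))
```

A single headline accuracy can hide exactly the gaps such a breakdown exposes, which is why per-group reporting matters in audits like these.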

These episodes force a necessary discussion about the use of facial recognition software, especially while AI development is still at a nascent stage and, above all, while such software remains unregulated. There are as yet no laws governing its use, and no consolidated rules on the ethics of image recognition. Yes, there are many benefits to face recognition software, but at present the problems outweigh them. The accuracy of any data-driven model depends on the data it is fed; facial recognition software depends on the faces provided to it. As AI researchers we dive into every new technology and application we think AI can be applied to; there is a need, however, to stop and consider the legal, moral and philosophical aspects of the technology we are building before we actually start on our models.

We arrive at the same questions again. Do we really need facial recognition? Is face-detection AI at the right threshold to be used effectively in mainstream applications? Who is responsible when a person is discriminated against by a facial recognition algorithm? Who is to blame when AI accidentally outs someone's sexual identity before they are ready to come out? Humans can be taught, through education, to be less racist, less sexist and less homophobic. Who is going to teach the AI? On the other hand, is the AI really at fault when, in a cosmopolitan city, it is fed predominantly white faces as input to recognise who enters a building? Maybe, instead of chasing that 99% validation accuracy and pushing publications, we should look at ourselves and ask: are we doing it right? Are we working with the right data? Maybe AI discriminates based on race, gender and colour because we are letting our human biases into the dataset [9].

AI is fast becoming a proverbial looking glass, reflecting society's values. With the social media revolution, however, the extremes of society are amplified and reach more people than ever. This opens a box of questions. Have we thought about the implications of what we are creating? Have we asked whether legal frameworks exist to regulate AI? Have we thought about the ethical implications of the datasets we collect? Several tech companies are stepping back from facial recognition: Amazon has suspended police use of Rekognition, and IBM has shut down its facial recognition research, both citing human rights concerns. Instead of shutting down the research altogether, we could build more robust data that addresses historical societal problems. We should focus on building the legal and ethical foundations and on understanding how we use our data.
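One concrete first step, before chasing that last point of validation accuracy, is simply to measure how the training data is distributed across demographic groups and to reweight or resample when it is skewed. The sketch below illustrates this under the assumption that demographic metadata (here hypothetical skin-type and gender labels) is available; in practice, collecting such labels raises its own ethical questions.

```python
# Rough sketch: check the demographic balance of a face dataset and derive
# inverse-frequency sampling weights. The metadata labels ("darker"/"lighter",
# "female"/"male") are hypothetical; real datasets rarely ship with them.
from collections import Counter

def balance_report(labels: list[tuple[str, str]]) -> dict:
    """labels: one (skin_type, gender) pair per training image."""
    counts = Counter(labels)
    total = sum(counts.values())
    share = {group: n / total for group, n in counts.items()}
    # Inverse-frequency weights: rarer groups get proportionally larger weights,
    # so a weighted sampler sees each group roughly equally often.
    weights = {group: total / (len(counts) * n) for group, n in counts.items()}
    return {"share": share, "weights": weights}

# Usage (hypothetical):
# meta = [("darker", "female"), ("lighter", "male"), ("lighter", "male")]
# print(balance_report(meta))
```

Rebalancing is not a cure for biased data collection, but a report like this at least makes the skew visible before a model is trained on it.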

References

[1] Luke Dormehl. "Surveillance on steroids: How A.I. is making Big Brother bigger and brainier". In: (2019). URL: https://www.digitaltrends.com/cool-tech/ai-taking-facial-recognition-next-level/.

[2] Julien Lauret. "Amazon's sexist AI recruiting tool: how did it go so wrong?" In: (2019). URL: https://becominghuman.ai/amazons-sexist-ai-recruiting-tool-how-did-it-go-so-wrong-e3d14816d98e.

[3] Yilun Wang and Michal Kosinski. "Deep neural networks are more accurate than humans at detecting sexual orientation from facial images." In: Journal of Personality and Social Psychology 114.2 (2018), p. 246.

[4] Sam Levin. "New AI can guess whether you're gay or straight from a photograph". In: (2017). URL: https://www.theguardian.com/technology/2017/sep/07/new-artificial-intelligence-can-tell-whether-youre-gay-or-straight-from-a-photograph.

[5] Paul Lewis. "I was shocked it was so easy". In: (2018). URL: https://www.theguardian.com/technology/2018/jul/07/artificial-intelligence-can-tell-your-sexuality-politics-surveillance-paul-lewis.

[6] Fabio Bacchini and Ludovica Lorusso. "Race, again: how face recognition technology reinforces racial discrimination". In: Journal of Information, Communication and Ethics in Society (2019).

[7] J. Bailey, J. Burkell and V. Steeves. "AI technologies — like police facial recognition — discriminate against people of colour". In: (2020). URL: https://theconversation.com/ai-technologies-like-police-facial-recognition-discriminate-against-people-of-colour-143227.

[8] Joy Buolamwini. "Artificial Intelligence Has a Problem With Gender and Racial Bias. Here's How to Solve It". In: (2019). URL: https://time.com/5520558/artificial-intelligence-racial-gender-bias/.

[9] Brendan F. Klare et al. "Face recognition performance: Role of demographic information". In: IEEE Transactions on Information Forensics and Security 7.6 (2012), pp. 1789–1801.