
Facial recognition technology: an instrument reinforcing racial hierarchies?



Today, Artificial Intelligence (AI) is rapidly infiltrating every aspect of our society. Recent developments in technology have enabled more sophisticated and complex algorithmic decisions, leading to a growing use of AI to support decision-making in many areas that have traditionally been performed by humans.[1] Facial recognition technology is one of the domains evolving most rapidly,[2] and its widespread use rests on the promising idea that AI is capable of making more objective and fairer decisions by removing the disadvantageous biases and prejudices that humans are likely to carry. However, a growing body of evidence demonstrates that facial recognition technologies risk reinforcing, propagating, and perpetuating discrimination. A widely publicized example is that of Google, whose image-recognition photo app mistakenly labeled a black couple as “gorillas”.[3] Several studies of major facial recognition systems have shown that people of color are disproportionately more likely to be miscategorized or to go unrecognized.[4]

One of the major causes of this bias is the lack of diversity in training sets.[5] Training sets are the collections of example faces that teach computers how to recognize other faces, and AI systems are only as smart as the data used to train them. Thus, if a training set contains more white faces than black faces, for example, the resulting system will have a harder time identifying people of color. More fundamentally, the data used in the development of these technologies reflects the inequalities that exist within our society, because humans are the ones who collect and label the data that goes into these systems. Consequently, existing human biases in the real world are encoded into AI systems.[6] Moreover, the ability of AI systems to recognize patterns and create their own sets of rules enables them to identify millions of factors that can be used to generate decisions, but these factors may be unintelligible to humans, since we have yet to find a reliable method for identifying, in the labyrinth of numbers, which factor triggered a given decision. This is often referred to as the “black box” problem, and this inability to explain a decision made by AI can constitute a barrier to effective accountability in cases where a person is negatively affected by an output of the system.
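To make the first point concrete, the following is a minimal, synthetic sketch in Python, not a real face-recognition pipeline: it trains an ordinary classifier on a deliberately imbalanced set of toy “embeddings” for two hypothetical demographic groups and then measures the error rate for each group separately. All feature values, sample sizes, and group names are invented for illustration.

```python
# Synthetic illustration only: how an imbalanced training set can produce
# unequal error rates across groups. Nothing here is a real face model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_samples(n, shift):
    """Generate n toy 'face embeddings' for one demographic group.

    Each sample is a hypothetical 2-D feature vector with a binary label
    (e.g. 'matches person X' vs. 'does not match person X').
    """
    labels = rng.integers(0, 2, n)
    features = rng.normal(loc=labels[:, None] * 1.5 + shift, scale=1.0, size=(n, 2))
    return features, labels

# Imbalanced training data: 1,000 examples for group A, only 50 for group B.
Xa_train, ya_train = make_samples(1000, shift=0.0)  # over-represented group
Xb_train, yb_train = make_samples(50, shift=3.0)    # under-represented group

model = LogisticRegression().fit(
    np.vstack([Xa_train, Xb_train]),
    np.concatenate([ya_train, yb_train]),
)

# Balanced test sets expose the accuracy gap the imbalance produces.
Xa_test, ya_test = make_samples(500, shift=0.0)
Xb_test, yb_test = make_samples(500, shift=3.0)
print("Error rate, group A:", 1 - model.score(Xa_test, ya_test))
print("Error rate, group B:", 1 - model.score(Xb_test, yb_test))
```

Because the classifier’s decision boundary is dominated by the over-represented group, the under-represented group ends up with a far higher error rate even though nothing in the code is explicitly “about” race; the disparity emerges from the data alone.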

Uncritically adopting these technologies may hence have catastrophic impacts on individuals and their rights, as pointed out by the UN High Commissioner for Human Rights.[7] To begin with, AI systems can reinforce racial biases and, what is worse, “wrongfully suggest that biases are natural”,[8] leading to the amplification of existing relations of domination.[9] More concretely, given the widespread use of facial recognition systems in the world today, differential accuracy can heighten the risk that underrepresented groups are disproportionately targeted by law enforcement authorities, or create additional obstacles for these groups in accessing key services. If we take into consideration the fact that AI systems are increasingly being integrated into autonomous weapon systems (AWS), the implications of systematic biases for humanitarian principles cannot be neglected either. How can we trust AWS to accurately distinguish combatants from civilians if people of color are more likely to be misrecognized? This is especially problematic because, in cases of commercial AI use, there is at least a possibility of overturning unjust algorithmic decisions through legal action (although the procedures often prove difficult and the moral damage can be considerable), whereas in the military context, once a target is misclassified, the results can be fatal and irreversible.

In spite of all the issues raised above, we cannot completely ignore the potential advantages of facial recognition systems. For example, humanitarian organizations may apply them to expand their capacity, such as by using them to reunite separated families. To avoid exacerbating biases, it is essential to recognize that these technologies are not neutral, but rather a product of existing relations of inequality, and all actors involved need to commit to careful planning, proactive identification of risks, and active monitoring of these systems, in order to ensure respect for human rights and humanitarian principles. In more concrete terms, at the level of AI developers, it is indispensable to create training sets that reflect the full spectrum of the population, in addition to improving the diversity of the workforce responsible for AI development.[10] Resolving the “black box” problem by advancing the explainability of AI decisions, as well as conducting continuous assessments and monitoring of these systems’ impact, is also vital for transparency and accountability (a sketch of such a disaggregated assessment follows below). However, a purely technical approach is not enough to tackle the issue, and the wider policy and legal context needs to be taken into account as well. The adoption of legislative and regulatory frameworks that set out clear guidelines for AI development and ensure access to effective remedies is crucial. Several standard-setting initiatives already exist, from the civil-society level to the international level,[11] and they need to be further promoted.
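As a hedged illustration of what such continuous assessment might look like in practice, the short Python sketch below disaggregates a system’s error rate by demographic group instead of reporting a single overall accuracy figure. The group names and the audit log are purely hypothetical; the point is the shape of the evaluation, not any particular system.

```python
# Sketch of a disaggregated evaluation: report error rates per demographic
# group rather than one overall accuracy figure. All data is illustrative.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical audit log of match decisions (group, ground truth, prediction).
audit_log = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
print(error_rates_by_group(audit_log))
# {'group_a': 0.0, 'group_b': 0.5} -- a gap this large should trigger review
# before the system is deployed or allowed to remain in operation.
```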

These measures are not exhaustive, but they provide a starting point for preventing and limiting harm while maximizing the benefits of facial recognition systems and, more generally, of AI technology. We must not forget that AI systems are ultimately a reflection of our society today; addressing biases in emerging technology is therefore not possible without actively considering and working to tackle racism in the real world. It is the responsibility of all actors to create a world in which technology works for all, serving not to entrench existing biases but to promote equality.



By Liya ALIEVA, Océane FOUQUEAU, Carla LECLERE and Yuma YAMAMOTO


REFERENCES


[1] This includes decisions on who is hired or fired by a company, who is granted a loan, who is admitted into a school, or how long an individual spends in prison, as pointed out by Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Broadway Books, 2017); Danielle Keats Citron and Frank Pasquale, The Scored Society: Due Process for Automated Predictions, Washington Law Review, 89, 2014, accessible here.

[2] Facial recognition technology refers to “algorithmic systems (and the associated hardware) that can analyze a person’s face to make a claim of an identity”; such systems are used for various activities today, from everyday tasks such as unlocking our smartphones to law enforcement operations for surveillance and crime prevention. See: Centre for Data Ethics and Innovation, Snapshot Series: Facial Recognition Technology, May 2020, accessible here.

[3] Steve Lohr, Facial Recognition Is Accurate, if You’re a White Guy, The New York Times, February 9, 2018, accessible here.

[4] Joy Buolamwini, a researcher at the Massachusetts Institute of Technology (MIT), conducted a study examining three major commercial facial recognition systems and found that for photos of white men the error rate is less than 1%, whereas for images of women with darker skin it rises to approximately 35%. The American Civil Liberties Union (ACLU) conducted another study on the subject, matching photos of U.S. Congress members against a database of 25,000 publicly available arrest photos (“mugshots”); the results showed that people of color were disproportionately more likely to be misidentified as criminals. For more details on these studies, see: Joy Buolamwini and Timnit Gebru, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of Machine Learning Research, 81, 2018, accessible here; Jacob Snow, Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots, American Civil Liberties Union, July 26, 2018, accessible here.

[5] Noel Sharkey, The impact of gender and race bias in AI, Humanitarian Law & Policy, August 28, 2018, accessible here.

[6] Genevieve Smith and Ishita Rustagi, When Good Algorithms Go Sexist: Why and How to Advance AI Gender Equity, Stanford Social Innovation Review, March 31, 2021, accessible here.

[7] Office of the United Nations High Commissioner for Human Rights (OHCHR), The right to privacy in the digital age: Report of the United Nations High Commissioner for Human Rights, A/HRC/48/31 (September 2021), accessible here.

[8] Jack Bahn, Combatting Racial Discrimination in Emerging Humanitarian Technologies, International Organization for Migration (IOM), March 22, 2021, accessible here.

[9] This tendency risks being exacerbated by AI systems’ feedback loops, because these systems are often programmed to retrain themselves incrementally on new data generated by their own earlier decisions.

[10] A recent study shows that diverse demographic groups are better at decreasing algorithmic bias, since they are more capable of interrogating biases that may arise throughout the process of developing, deploying, and operating an AI system. See: Bo Cowgill et al., Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics, Proceedings of the 21st ACM Conference on Economics and Computation, July 2020, accessible here.

[11] One example of an initiative at the civil-society level is that of the Humanitarian Data Science and Ethics Group, whose “Framework for the Ethical Use of Advanced Data Science in the Humanitarian Sector” and “Decision Tree” provide a set of ethical guidelines that encourage system developers to identify and tackle biases. At the international level, UNESCO’s AI Decision Maker’s Toolkit, for example, aims to push decision-makers to ensure a human rights-based and ethical development of AI by providing recommendations, implementation guides, and capacity-building resources.



BIBLIOGRAPHY


Articles and documents

Bahn, J., Combatting Racial Discrimination in Emerging Humanitarian Technologies, International Organization for Migration (IOM), March 22, 2021, accessible here [accessed on: February 25, 2022].

Buolamwini, J., and T. Gebru, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of Machine Learning Research, 81, 2018, accessible here [accessed on: February 25, 2022].

Chandler, K., AI is Often Biased. Will UN Member States Acknowledge This in Discussions of Autonomous Weapon Systems?, Global Observatory, September 20, 2021, accessible here [accessed on: February 28, 2022].

Citron, D. K., and F. Pasquale, The Scored Society: Due Process for Automated Predictions, Washington Law Review, 89, 2014, accessible here [accessed on: February 28, 2022].

Coppi, G., R. M. Jimenez, and S. Kyriaz, Explicability of humanitarian AI: a matter of principles, Journal of International Humanitarian Action, 6, 2021, accessible here [accessed on: February 28, 2022].

Cowgill, B., F. Dell’Acqua, S. Deng, D. Hsu, N. Verma, and A. Chaintreau, Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics, Proceedings of the 21st ACM Conference on Economics and Computation, July 2020, accessible here [accessed on: February 28, 2022].

Dawes, J., UN fails to agree on ‘killer robot’ ban as nations pour billions into autonomous weapons research, The Conversation, December 20, 2021, accessible here [accessed on: March 1, 2022].

Garvie, C., A. Bedoya, and J. Frankle, The Perpetual Line-Up: Unregulated Police Face Recognition in America, Georgetown Law Center on Privacy & Technology, October 18, 2016, accessible here [accessed on: February 28, 2022].

Humanitarian Data Science and Ethics Group, A Framework for the Ethical Use of Advanced Data Science Methods in the Humanitarian Sector, April 2020, accessible here [accessed on: March 5, 2022].

Humanitarian Data Science and Ethics Group, Decision Tree for Ethical Humanitarian Data Science, Data Science and Ethics Group, accessible here [accessed on: March 5, 2022].

Klare, B. F., M. J. Burge, J. C. Klontz, R. W. Vorder Bruegge, and A. K. Jain, Face recognition performance: Role of demographic information, IEEE Transactions on Information Forensics and Security, 7(6), 2012, accessible here [accessed on: February 28, 2022].

Lohr, S., Facial Recognition Is Accurate, if You’re a White Guy, The New York Times, February 9, 2018, accessible here [accessed on: February 25, 2022].

O’Neil, C., Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Broadway Books, 2017.

Sharkey, N., The impact of gender and race bias in AI, Humanitarian Law & Policy, August 28, 2018, accessible here [accessed on: February 25, 2022].

Smith, G., and I. Rustagi, When Good Algorithms Go Sexist: Why and How to Advance AI Gender Equity, Stanford Social Innovation Review, March 31, 2021, accessible here [accessed on: February 28, 2022].

Snow, J., Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots, American Civil Liberties Union, July 26, 2018, accessible here [accessed on: February 28, 2022].

Tellier, M., États-Unis : la reconnaissance faciale accusée de favoriser les biais racistes, France Culture, June 14, 2020, accessible here [accessed on: February 25, 2022].

United Nations, Urgent action needed over artificial intelligence risks to human rights, UN News, September 15, 2021, accessible here [accessed on: February 25, 2022].

United Nations Educational, Scientific and Cultural Organization (UNESCO), Building Institutional Capacity in Public Policy Development in the Field - A Decision Maker’s Toolkit of AI, UNESCO, accessible here [accessed on: March 1, 2022].


Reports

Centre for Data Ethics and Innovation, Snapshot Series: Facial Recognition Technology, May 2020, accessible here [accessed on: March 1, 2022].

Centre for Data Ethics and Innovation, Review into bias in algorithmic decision-making, November 2020, accessible here [accessed on: March 1, 2022].

Office of the United Nations High Commissioner for Human Rights (OHCHR), The right to privacy in the digital age: Report of the United Nations High Commissioner for Human Rights, A/HRC/48/31 (September 2021), accessible here [accessed on: February 28, 2022].


