Facing Bias and Ethical Challenges in Facial Recognition Technology

By Nicole Martinez-Martin

Image courtesy of Pikrepo
In recent months, the movement to protest police brutality and systemic racism has brought attention to the use of facial recognition technology (FRT) for mass surveillance and racial profiling. In response, Microsoft and Amazon announced they would stop selling FRT to law enforcement, and IBM scaled back its development and sale of FRT systems. Although much of the criticism of FRT has focused on its use in identifying and tracking people for law enforcement purposes, FRT’s application to the recognition of emotion, personality traits, and behavioral health also merits ethical scrutiny. In May of this year, researchers at Harrisburg University announced that they had developed software that could predict whether a person is a criminal based on a picture of their face. The news was met with a swift wave of criticism calling the project an updated phrenology, both unrealistic in its stated goals and almost certain to involve racial bias. The researchers subsequently revised their claims to address the criticism, but the incident demonstrates the need for efforts to address the ethical development and use of behavioral applications of FRT. As FRT is applied to broader uses, particularly the assessment of behavior and emotion, it is critical to examine the potential for misuse and for racial bias within these systems, and whether some applications should be pursued at all.

Facial recognition systems use algorithms to analyze images of human faces in order to make identifications or to assess specific features. In recent years, FRT systems have also been applied to medical and behavioral tasks, such as recognizing emotion or identifying whether a person is in pain. FRT is being used to diagnose genetic disorders and to identify mental health conditions, such as depression, schizophrenia, or autism. Facial recognition software can also be applied for employment purposes, such as analyzing facial characteristics to assess a candidate’s personality or “culture fit.” FRT is likewise being developed for lie detection, which could be used by employers or law enforcement.
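At its core, an identification pipeline of this kind converts each face image into a numerical embedding and compares it against a gallery of embeddings from known faces. Below is a minimal Python sketch of that comparison step, assuming the embeddings have already been produced by some face-encoder model; the function names, threshold, and toy data are illustrative assumptions, not taken from any particular FRT product.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, gallery, threshold=0.6):
    """Return the gallery identity whose embedding is most similar to the
    probe embedding, or None if no candidate clears the threshold."""
    best_name, best_score = None, -1.0
    for name, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Hypothetical usage: in practice the embeddings would come from a trained
# face-encoder model applied to face images, not random vectors.
rng = np.random.default_rng(0)
gallery = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = gallery["person_a"] + rng.normal(scale=0.05, size=128)  # noisy re-capture
print(identify(probe, gallery))
```

The choice of matching threshold trades false matches against false non-matches, and, as discussed below, those error rates have been found to differ across demographic groups.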

Image courtesy of Wikimedia Commons
FRT has raised concerns regarding privacy and civil liberties due to its use for tracking and surveillance. Because images can be collected passively, individuals may not know their image is being used for FRT or have the opportunity to provide consent. Another pressing area of ethical concern in FRT is the potential for bias and discrimination. As is the case for machine learning algorithms in general, racial bias has been found to present challenges for FRT systems. One reason for algorithmic bias is a lack of diversity in training data. Databases used to train FRT algorithms have often contained significantly fewer images of people who are not white men, which can produce biased outputs and discriminatory effects in the resulting FRT applications. A study by the National Institute of Standards and Technology found that many FRT algorithms were 10 to 100 times more likely to misidentify a photo of a Black or East Asian face than a photo of a white one. In 2018, researchers Joy Buolamwini and Timnit Gebru found that facial recognition algorithms developed by IBM and Microsoft had error rates of 0.3% in classifying white men, but error rates of up to 21-35% in classifying the faces of darker-skinned women.
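The disparities these audits report come from computing error rates separately for each demographic group rather than reporting a single aggregate figure. The following is a small, hypothetical Python sketch of that disaggregation; the column names and toy records are assumptions made for illustration, not data from the studies cited above.

```python
import pandas as pd

# Illustrative audit records: each row is one classification attempt.
results = pd.DataFrame({
    "group":     ["lighter_male", "lighter_male", "lighter_male",
                  "darker_female", "darker_female", "darker_female"],
    "predicted": ["match", "match", "non_match", "match", "non_match", "match"],
    "actual":    ["match", "match", "non_match", "non_match", "match", "match"],
})

# A single aggregate error rate can hide large disparities between groups ...
results["error"] = results["predicted"] != results["actual"]
overall_error = results["error"].mean()

# ... so the error rate is also computed separately for each demographic group.
per_group_error = results.groupby("group")["error"].mean()

print(f"overall error rate: {overall_error:.2f}")
print(per_group_error)
```

In this toy data the overall error rate looks moderate, while the per-group figures show that all of the errors fall on one group, which is the pattern the NIST and Gender Shades audits documented at scale.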

Examinations of the ethics of FRT must engage the full range of proposed applications. With behavioral applications of FRT, a lack of diverse training data, or of sufficient measures to account for race, gender, or ethnicity, could produce biased results and tools with discriminatory impact. For example, one study found that an FRT tool used for diagnosing Down syndrome recognized 80% of cases among white Belgian children but only 37% of cases among Black Congolese children. It will also be important to recognize how cultural and social context influence the intended use of an FRT tool and may lead to discrimination. A human resources algorithm deployed to assess a potential employee’s “fit” is likely to reflect existing racial and gender biases regarding who “fits” into certain jobs and businesses. FRT used for behavioral and health assessments thus has the potential to create disparate impact for marginalized groups in employment, education, and health, as well as in law enforcement.

The Association for Computing Machinery (ACM), the world’s largest scientific computing society, recently issued Principles for the Development, Evaluation and Use of Unbiased Facial Recognition Technologies. The ACM stressed the need for FRT developers to address accuracy, transparency, governance, risk management, and accountability before proceeding with FRT. The ACM’s recommendations include ensuring that a system’s biases and inaccuracies are understood and addressed before that system is used to make decisions that affect people’s civil rights, as well as providing for third-party auditing of systems. Additionally, the ACM recommends that reported error rates in FRT be “disaggregated by sex, race, and other context-dependent demographic features where relevant.” A proposed Senate bill, the Facial Recognition and Biometric Technology Moratorium Act, seeks to limit FRT and biometric surveillance by the federal government. Guidelines and regulation for FRT should be broad enough to address the range of potential applications, including behavioral and medical uses, and should cover applications in the private sector as well as by government and law enforcement. Before any moratoriums are lifted, it will be important to put in place mechanisms for ongoing accountability and review.

Image courtesy of Pixabay
The ethical problems surrounding FRT underline the need not just to focus on high-level principles, but to go further in integrating ethics into how AI products are made and used, such as establishing processes to anticipate and prepare for the potential societal impact or dual use of the technologies. FRT projects such as the effort to predict criminality from a photo, or Stanford researchers’ use of FRT a few years ago to identify sexual orientation, would seem to present obvious ethical red flags from the outset. That these projects went forward demonstrates the importance of development processes and training that include anticipating the potential misuse or discriminatory impact of a project. There needs to be room to evaluate whether a given FRT tool should be developed at all if the potential for bias cannot be sufficiently mitigated. Computer scientists such as Joy Buolamwini, Timnit Gebru, and Deborah Raji provided evidence of racial bias in FRT and voiced objections to police use of the technology for years, but faced resistance to their efforts within the industry. Even though the validity of these concerns is now recognized, it seems to have taken extraordinary societal events for the field to take more sustained action to address racial bias and the potential for misuse of FRT. The path toward ethical development and use of FRT calls not only for the participation of more diverse researchers in computer science, but also for change within industry and academic institutions to better support those working against racist and discriminatory practices. As FRT moves into applications that assess health, behavior, and personality, it will be important to ensure transparency and engage diverse stakeholders in order to recognize how these tools may produce bias or be misused, and to anticipate ethical challenges.

______________

Nicole Martinez-Martin, JD, PhD, is an associate professor at Stanford’s Center for Biomedical Ethics. Her research interests include the use of AI and digital tools for mental health care, as well as the impact of behavioral technologies on minority groups and vulnerable populations.






Want to cite this post?

Martinez-Martin, N. (2020). Facing Bias and Ethical Challenges in Facial Recognition Technology. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2020/09/facing-bias-and-ethical-challenges-in.html
