Mitigating ethical concerns is good for business
Facial recognition has become a technology that most of us rely on every day. It unlocks our mobile phones in place of a password; social media platforms like Facebook use it to identify users in uploaded and shared photos; and it appears on self-checkout machines at many retail outlets such as Walgreens and Walmart. (nec.co; thalesgroup.com)
Most Americans support the use of the technology for security purposes, according to a survey by the Center for Data Innovation. A majority of respondents, 54.3%, support the use of facial recognition by airports for safety screenings, and 54.8% said facial recognition should not be limited as long as it adds to public safety.
But as facial detection, recognition, and analysis technologies have become pervasive in our society, there are also very real concerns about how our facial images are used. These include the capture and storage of our facial images in databases without consent, and the ways unintended algorithmic bias can produce unforeseen consequences for our personal privacy.
So, what is algorithmic bias? Algorithmic bias refers to systematic errors that can lead to “unfair” outcomes, such as favoring one category over another, contrary to the algorithm’s intended function. (Wikipedia.org)
For this article, we will focus on the differences between facial recognition and facial analysis, consider how to mitigate algorithmic bias, and finally discuss what an app called WTF? Why The Face gets right with regard to privacy, user consent, and reducing bias.
What is the difference between Facial Recognition and Facial Analysis?
Both facial recognition and facial analysis begin with facial detection. Using artificial intelligence (AI) applications, facial detection identifies human faces in digital images. When the application recognizes an image as a face, facial recognition or facial analysis can be performed.
Facial recognition identifies or verifies an actual person. When a face is detected, facial recognition software examines the face using facial landmarks or details to create a mathematical representation of the face—which is then compared with a database of faces to find a match and identify the individual.
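At its core, the matching step described above reduces to comparing a mathematical representation (an embedding vector) of the detected face against a database of enrolled embeddings. Below is a minimal sketch of that comparison logic in Python; the function name, the embedding vectors, and the distance threshold are illustrative assumptions, not any specific vendor's API.

```python
import numpy as np

def identify(probe, database, threshold=0.6):
    """Return the name of the closest enrolled face, or None if no
    enrolled embedding is within the match threshold.

    probe:    NumPy vector representing the detected face
    database: dict mapping person name -> enrolled embedding vector
    """
    best_name, best_dist = None, float("inf")
    for name, enrolled in database.items():
        # Euclidean distance between the probe and an enrolled face
        dist = float(np.linalg.norm(probe - enrolled))
        if dist < best_dist:
            best_name, best_dist = name, dist
    # Only report a match if the closest face is actually close enough
    return best_name if best_dist <= threshold else None
```

Real systems use high-dimensional embeddings produced by a trained network, but the decision logic, nearest neighbor under a distance threshold, looks much like this.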
Facial analysis uses the same facial landmarks used in facial recognition, but its goal is to interpret a person's emotions from their facial expressions without identifying the person. (www.algoface.ai) There are two approaches to facial analysis: geometric feature-based methods and appearance-based methods. Geometric features represent the shape and locations of facial components (including the mouth, eyes, brows, and nose). Appearance-based algorithms focus on transient features (wrinkles, bulges, and furrows) that describe changes in face texture, intensity, histograms, and pixel values. (ietresearch.com, May 2019)
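The two approaches can be illustrated with a toy sketch: geometric methods work with distances between landmark coordinates, while appearance methods work with pixel statistics such as intensity histograms. The functions below are simplified illustrations under those assumptions, not production feature extractors.

```python
import numpy as np

def geometric_features(landmarks):
    """Pairwise distances between facial landmark points (e.g. eye
    centers, mouth corners). Changes in these distances track how
    facial components move during an expression."""
    pts = np.asarray(landmarks, dtype=float)
    feats = []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            feats.append(np.linalg.norm(pts[i] - pts[j]))
    return np.array(feats)

def appearance_features(patch, bins=8):
    """Normalized intensity histogram of a face region. Transient
    texture changes such as wrinkles and furrows shift this
    distribution of pixel values."""
    hist, _ = np.histogram(np.asarray(patch), bins=bins, range=(0, 256))
    return hist / hist.sum()
```

In practice the two are often combined, since geometric features capture shape while appearance features capture texture.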
Mitigating Algorithmic Bias in Facial Recognition and Analysis
To avoid the risk of losing customers and damaging a brand’s reputation, companies should be mindful of how AI algorithms could be biased toward certain groups.
In “What is AI Bias really, and how can you combat it?”, Victoria Shahskina (2021) discusses reducing bias in machine learning algorithms and suggests practical steps that companies can take immediately:
- Consider the context: Some industries and use cases of machine learning have a record of creating biased systems. Being aware of where AI has struggled in the past can help companies build fairness into their projects from the beginning
- Design AI models to be inclusive: To mitigate bias in AI models, engage with researchers who study human judgment and behavior. Set measurable goals for AI models to perform equally well across planned use cases
- Train your AI models on complete, representative data: Create clear guidelines for collecting, sampling, and preprocessing the training data. Also set criteria for spotting discriminatory correlations and potential sources of bias in your datasets
- Perform targeted testing: While evaluating your models, test their performance across subgroups to uncover any problems that might be masked by aggregate measures. Also, perform a series of stress tests to evaluate how the model performs in complex cases. Keep in mind that you should continually retest your models as you gain more real-life data and get feedback from users
- Enhance human decision-making: AI can also surface bias in human decision-making. When a model trained on recent human decisions or behavior reveals bias, organizations can use that information to improve the underlying processes
- Improve AI explainability: Additionally, keep in mind that the issue of explainability—i.e., understanding how AI generates predictions and what features of the data it uses to make decisions—is very important. Understanding whether the factors supporting the decision reflect AI bias can help in identifying and mitigating prejudice
WTF? Why The Face - What They Got Right
This article has covered facial recognition, facial analysis, and algorithmic bias. This final section examines what an app called WTF? Why The Face gets right in relation to reducing bias and ensuring user consent, practices that, in a broader business application, may prevent liability issues, reputational damage, and future regulation.
The SilverLogic developed the app in collaboration with Dr. Todd Frisch and his daughter Abbie Frisch Belliston based on their book, WTF? Why the Face: A Practical Guide to Understanding Health and Personality through Facial Diagnosis. The book teaches the reader how to use face-reading techniques to build stronger connections with coworkers, friends, family members, customers, and others with whom you interact daily.
The app allows the user to upload photos and have them analyzed, bringing a unique and interesting way of self-understanding by analyzing facial characteristics. Diagnosticians can use it to help with the diagnosis of medical issues but it can also be a powerful tool for businesses, sports teams, film and television casting, mental health professionals, and schools.
Regarding the issues of privacy, consent, and algorithmic bias, the WTF? Why The Face app gets many things right:
- As part of the app’s terms and conditions, anyone who uses it for personal or professional purposes opts into the system, consents to have their face scanned, stored, and analyzed, and pays for that service
- Everyone whose face is scanned and who receives the results of a facial scan can leave feedback on the process and results
- Finally, to mitigate algorithmic bias and maintain the quality of the analysis, there is a human in the facial analysis loop
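The human-in-the-loop point above amounts to a simple gate: no automated result reaches the user until a reviewer has approved or corrected it. The class and method names below are a hypothetical sketch of that pattern, not the app's actual implementation.

```python
class ReviewQueue:
    """Minimal human-in-the-loop gate: automated analysis results are
    held in a pending queue until a human reviewer releases them."""

    def __init__(self):
        self.pending = []   # results awaiting human review
        self.released = []  # results approved for delivery to users

    def submit(self, user_id, auto_result):
        """Queue an automated result; nothing is shown to the user yet."""
        self.pending.append({"user": user_id, "auto": auto_result})

    def review(self, correction=None):
        """A reviewer processes the oldest pending item, either
        approving the automated result or substituting a correction."""
        item = self.pending.pop(0)
        item["final"] = correction if correction is not None else item["auto"]
        self.released.append(item)
        return item
```

The reviewer's corrections also double as labeled feedback that can be fed back into retraining, which ties this gate to the bias-mitigation steps discussed earlier.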
The purpose of the WTF? Why The Face app is to educate users and participants as to what the analysis of their faces means for them and them alone. This is in stark contrast to some business applications where a person's face is scanned and stored, sometimes without explicit consent, where privacy may be compromised, and where algorithmic bias is possible.
So, while facial recognition technology has very real benefits for tech, business, law enforcement, and many other industries, there must be a concerted effort to mitigate the ethical issues of privacy, consent, and algorithmic bias and alleviate the public’s concern with how this biometric data is used, stored, and how long it will be stored. Innovation in the sphere of facial detection, recognition, and analysis should also be coupled with responsibility and regulation. Companies that innovate and take into account the ethical issues of this technology will be better able to shape how this technology evolves in the future for the betterment of everyone. (Gemalto, 2020)