Facial recognition has been adopted by organizations around the world, both private and government-owned. While the technology can serve useful purposes, its ubiquity has given rise to negative impacts on society, such as bias against minority groups. An important condition for using facial recognition software legitimately is that it be accurate for all demographic groups.
With that in mind, Sergio Escalera, from the Human Pose Recovery and Behavior Analysis group at the Computer Vision Center (CVC) at the University of Barcelona, led a team of researchers in organizing a challenge at the European Conference on Computer Vision (ECCV) 2020. As required by the challenge, participants submitted algorithms that could perform facial recognition tasks in the presence of confounding attributes. The results, recently published in Computer Vision – ECCV 2020 Workshops, examined the algorithms' accuracy and bias with respect to skin color and gender in light of the other attributes present.
For Julio C. S. Jacques Jr., researcher at CVC and at the Universitat Oberta de Catalunya's (UOC) Scene Understanding and Artificial Intelligence (SUNAI) Lab in the Faculty of Computer Science, Multimedia and Telecommunications, the challenge was a success. 'It attracted 151 participants, who made more than 1,800 submissions in total, exceeding our expectations regarding the number of participants and submissions', he said.
The participants made use of an unbalanced image dataset, which simulates a real-world scenario in which AI-based models are trained and evaluated on imbalanced data (for example, more white males than dark-skinned females). The participants worked with 152,917 images depicting 6,139 identities.
The images were then annotated for two protected attributes (gender and skin color) and five other legitimate attributes: age group (0-34, 35-64, 65+), head pose (frontal, other), image source (still image, video frame), wearing glasses, and bounding box size.
The results look quite promising. The winning solutions achieved 0.999 accuracy along with very low scores on the proposed bias metrics, an advancement towards the development of less biased facial recognition methods. An evaluation of the top ten teams showed higher false positive rates for females with dark skin tones and for samples where both individuals wore glasses. Conversely, false negative rates were higher for males with light skin tones and for samples where both individuals were under 35.
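The kind of per-group analysis described above can be sketched in a few lines. This is a minimal illustration, not the challenge's actual evaluation code: it assumes each sample is a verification pair with a ground-truth "same identity" label, the model's prediction, and a demographic group label, and the group names and data are invented for the example.

```python
# Sketch: per-group false positive / false negative rates for face
# verification. Assumes samples of (group, same_identity?, predicted_same?).
# Group labels and toy data below are illustrative, not from the challenge.
from collections import defaultdict

def error_rates_by_group(samples):
    """Return {group: (false_positive_rate, false_negative_rate)}."""
    counts = defaultdict(lambda: {"tp": 0, "fn": 0, "fp": 0, "tn": 0})
    for group, truth, pred in samples:
        c = counts[group]
        if truth and pred:
            c["tp"] += 1          # correctly matched same-identity pair
        elif truth and not pred:
            c["fn"] += 1          # missed a genuine match
        elif not truth and pred:
            c["fp"] += 1          # matched two different identities
        else:
            c["tn"] += 1          # correctly rejected a non-match
    rates = {}
    for group, c in counts.items():
        fpr = c["fp"] / (c["fp"] + c["tn"]) if (c["fp"] + c["tn"]) else 0.0
        fnr = c["fn"] / (c["fn"] + c["tp"]) if (c["fn"] + c["tp"]) else 0.0
        rates[group] = (fpr, fnr)
    return rates

# Toy pairs: (group, ground-truth same identity, model prediction)
pairs = [
    ("dark_female", False, True),   # false positive
    ("dark_female", False, False),  # true negative
    ("light_male", True, False),    # false negative
    ("light_male", True, True),     # true positive
]
print(error_rates_by_group(pairs))
```

Comparing these rates across groups, rather than looking at overall accuracy alone, is what exposes the disparities the challenge organizers report.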
'This was not a surprise, as the adopted dataset was not balanced with respect to different demographic attributes. However, it shows that overall accuracy is not enough when the goal is to build fair facial recognition methods, and that future work on the topic must take into account both accuracy and bias mitigation', Jacques concluded.
By Marvellous Iwendi.