Gender and Racial Bias in Computer Vision

Models and algorithms created by engineers can fail to serve people of a particular gender or ethnicity, a phenomenon known as cognitive AI bias. Image AI recognition systems, in particular, frequently fail to detect and identify people from underrepresented racial groups and genders. This is a serious problem, especially for systems designed to provide equitable and transparent service to all members of society.
June 28, 2022

A concept or object is biased when it receives a disproportionate amount of support or opposition, frequently in an unjust, prejudiced, or closed-minded manner. Biases may be ingrained or acquired, and people can develop biases for or against a person, a group, or a belief. In research and engineering, a bias is a type of systematic error: statistical bias arises from an unrepresentative sample of a population, or from an estimating procedure that does not produce accurate results on average.
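To make the statistical sense concrete, here is a minimal sketch (with purely illustrative numbers) of how an unrepresentative sample systematically skews an estimate:

```python
# A minimal sketch of statistical bias: estimating a population mean
# from an unrepresentative sample. All numbers are illustrative.
import random

random.seed(0)

# Hypothetical population made of two groups with different trait values.
group_a = [random.gauss(170, 7) for _ in range(8000)]
group_b = [random.gauss(160, 7) for _ in range(2000)]
population = group_a + group_b

true_mean = sum(population) / len(population)

# Biased sampling: we only ever draw from group A.
biased_sample = random.sample(group_a, 500)
biased_mean = sum(biased_sample) / len(biased_sample)

# Representative sampling: draw from the whole population.
fair_sample = random.sample(population, 500)
fair_mean = sum(fair_sample) / len(fair_sample)

print(f"true mean:   {true_mean:.1f}")
print(f"biased mean: {biased_mean:.1f}")  # systematically too high
print(f"fair mean:   {fair_mean:.1f}")    # close to the true mean
```

No amount of extra sampling fixes the biased estimate; the error is built into the sampling procedure itself, which is exactly what happens when a face dataset draws from only one demographic.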

Biased computer vision systems produce a litany of errors. Owing to the absence of ethnic balance in face-recognition training data, Google Photos has labeled human faces as "gorillas" and Flickr has labeled them as "apes." Another well-known example is Twitter's saliency-detection technique, which favored white faces over black ones when cropping photographs. Zoom has also received criticism for its virtual-backgrounds feature, which erased a black professor's head because it failed to recognize it.

What Leads To Racial And Gender Bias In Images? 

AI models and algorithms can only detect and recognize the things (or people) they have been trained to detect and recognize. Almost all image AI models and algorithms currently in use have been trained and evaluated on enormous datasets, and building such datasets takes a great deal of effort and money. A model or algorithm designed to serve the public needs high-quality headshot images of numerous individuals who differ in gender and race. Because assembling such enormous datasets is so laborious, most engineers prefer to use open- or closed-source datasets where much of the data collection has already been done for them.

What they overlook is the need for adequate data on people of nearly every gender and race: a model or algorithm can report extremely high accuracy overall and still fail the groups missing from its data. If you train your models and algorithms exclusively on data from white individuals, they will be ineffective when applied to black people, and the same holds for gender. Making sure that every racial and gender group that might use your service or product is represented in your training and testing data is entirely your organization's responsibility; a simple audit of that representation is sketched below. Only once these issues are resolved can we have unbiased systems, models, and algorithms in cognitive AI.
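As a starting point, here is a minimal sketch of such an audit. It assumes each training image carries hypothetical self-reported metadata fields named gender and race; real datasets may store this differently, if at all:

```python
# A minimal sketch of a dataset representation audit, assuming
# hypothetical per-image metadata with "gender" and "race" fields.
from collections import Counter

# Hypothetical metadata records; in practice these would be loaded
# from the dataset's annotation files.
records = [
    {"file": "img_0001.jpg", "gender": "female", "race": "black"},
    {"file": "img_0002.jpg", "gender": "male",   "race": "white"},
    {"file": "img_0003.jpg", "gender": "male",   "race": "asian"},
    # ... thousands more records in a real audit
]

def audit(records, field):
    """Report the share of each group for one metadata field."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    for group, n in counts.most_common():
        print(f"{field}={group}: {n} images ({100 * n / total:.1f}%)")

audit(records, "gender")
audit(records, "race")
# Groups far below their share of the intended user population are
# candidates for targeted data collection before training begins.
```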

Common Situations in Which AI Systems Have Demonstrated Cognitive AI Bias

In recent years, numerous individuals have become aware of the cognitive bias that most AI systems exhibit and have publicly documented it, both to inform the public and to pressure the firms responsible into fixing their AI systems. I'll highlight a few of those tweets so that anyone interested in what the open testing revealed can look them up.

  1. Zoom Issues With Black Faces

When virtual backgrounds are used, Zoom can fail to recognize black faces and erase them. Twitter user Colin Madland discovered that Zoom's face detection removed a black faculty member's head when he switched to a virtual background. The photograph documenting the event is shown below.

  2. Twitter Cropping Out Black Faces in Posts

Twitter crops uploaded photographs with a saliency algorithm so that the preview highlights the subject's face. Many users have observed that when a single photograph contains multiple faces of people of different races, the cropping algorithm prioritizes white faces (a generic sketch of saliency-based cropping is shown after this list).

  3. Twitter Cropping Women Out of Image Previews

Earlier, in 2019, VentureBeat published a tweet featuring Yann LeCun, Hilary Mason, Andrew Ng, and Rumman Chowdhury's predictions for AI in 2019. As users noticed, the women featured in the post had their faces cropped out of the image previews. This is an illustration of the Twitter algorithm's gender bias.
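Twitter's production model is proprietary, so the sketch below only illustrates the general mechanism behind both Twitter examples above: saliency-based cropping. It uses OpenCV's spectral-residual saliency (from the opencv-contrib-python package) and a hypothetical input file name:

```python
# A generic illustration of saliency-based cropping; Twitter's actual
# model is proprietary and not reproduced here.
import cv2
import numpy as np

def saliency_crop(image, crop_w, crop_h):
    """Crop a window centered on the most salient point of the image."""
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = saliency.computeSaliency(image)
    if not ok:
        raise RuntimeError("saliency computation failed")

    # Locate the most salient pixel and center the crop on it.
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    h, w = image.shape[:2]
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    return image[top:top + crop_h, left:left + crop_w]

image = cv2.imread("group_photo.jpg")  # hypothetical input file
preview = saliency_crop(image, 600, 335)
cv2.imwrite("preview.jpg", preview)
```

Whatever region the saliency model scores highest "wins" the crop, so any skew in what the model finds salient becomes a skew in who appears in previews.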

Consequences Of Cognitive Bias In AI

As we actively incorporate AI into our daily lives, its gender and racial bias has harmed many people of color and people of underrepresented genders over the years. These systems have made many inaccurate predictions, which have led to wrongful imprisonment, denial of access to particular services, and even loss of life.

This bias is amplified by the prevalence of stock photography, which is notorious for promoting prejudices against minorities and women, in image repositories and search results. It does so either by over-representing or sexualizing them in specific categories, or by under-representing them in general categories such as occupations.

Additionally, absences may be a defining characteristic of some classes. According to one study's findings, models learned to label photographs as "basketball" based on the presence of a black person, even though white and black people appeared in "basketball" images with comparable frequency in the dataset. Although the data for the class "basketball" was balanced, several other classes had a preponderance of white people while black people were underrepresented, so the model learned the spurious association. A check for this failure mode is sketched below.
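Here is a minimal sketch of such a check, assuming hypothetical per-image annotations of the true label, the perceived group of the people pictured, and the model's prediction:

```python
# A minimal sketch of a per-group recall check for spurious
# associations, using hypothetical evaluation annotations.
def recall(rows, label):
    """Fraction of rows whose prediction matches the given label."""
    hits = sum(1 for _, _, pred in rows if pred == label)
    return hits / len(rows) if rows else float("nan")

# Hypothetical rows: (true label, perceived group, predicted label).
annotations = [
    ("basketball", "black", "basketball"),
    ("basketball", "white", "not_basketball"),
    ("basketball", "black", "basketball"),
    ("basketball", "white", "basketball"),
    # ... many more rows from a real evaluation set
]

for group in ("black", "white"):
    rows = [r for r in annotations
            if r[0] == "basketball" and r[1] == group]
    print(f"recall on true 'basketball' images, group={group}: "
          f"{recall(rows, 'basketball'):.2f}")
# A large recall gap between groups on the same true class suggests
# the model is keying on the people pictured, not the activity.
```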

How to Solve Racial and Gender Bias in Cognitive AI?

The first and most crucial step in combating gender and racial bias in cognitive AI is correcting the datasets we use to train and test our systems, models, and algorithms. We also need to review our data gathering and storage procedures and diversify our engineering teams. Before deploying systems, models, and algorithms to production, we must carefully verify that they have passed every test we can carry out, so that they are robust and serve every gender and race of people likely to use them; one such test is the disaggregated evaluation sketched below.
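As one example, here is a minimal sketch of a disaggregated release gate. Overall accuracy alone can hide subgroup failures, so accuracy is reported per group; the group labels, data, and threshold here are illustrative assumptions:

```python
# A minimal sketch of a disaggregated pre-deployment test: report
# accuracy per subgroup and fail the gate if any group falls short.
def accuracy(pairs):
    return sum(1 for y, p in pairs if y == p) / len(pairs)

def disaggregated_report(y_true, y_pred, groups, min_acc=0.95):
    """Return False if any subgroup's accuracy is below min_acc."""
    ok = True
    for g in sorted(set(groups)):
        pairs = [(y, p) for y, p, gg in zip(y_true, y_pred, groups)
                 if gg == g]
        acc = accuracy(pairs)
        print(f"group={g}: accuracy={acc:.3f} on {len(pairs)} samples")
        if acc < min_acc:
            ok = False
    return ok

# Hypothetical evaluation results:
y_true = [1, 0, 1, 1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
groups = ["a", "a", "a", "b", "b", "b", "b", "b"]
assert disaggregated_report(y_true, y_pred, groups, min_acc=0.6), \
    "at least one subgroup failed the accuracy gate"
```

Gating releases on the worst-performing subgroup, rather than the average, is what keeps a model that works well "on average" from shipping while still failing a particular race or gender.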

All of these instances also point to a deeper problem: race and gender are social constructs, so these two uses of computer vision are fundamentally fraught. The AI community has recently come to the conclusion that both classifications are reductionist and, because they are not objective visual qualities, may significantly harm the individuals being assessed. One key to managing gender and racial bias in AI is adopting an intersectional and decolonial perspective that centers the vulnerable communities who continue to suffer the negative effects of scientific advancement.

Written by Denny Fardian