
How To Solve Algorithmic Gender Bias Problems

Gender bias in algorithmic design is an important topic when it comes to the development of systems using artificial intelligence.

There have been multiple high-profile examples of algorithms perpetuating various biases when brought into production.

In this edition of Academic Papers Made Simple, we share takeaways from a recent paper that discusses ways of preventing gender bias propagation.

There have been many papers published on the topic of preventing bias in algorithms. However, this one puts an exciting spin on it.

In Academic Papers Made Simple, we take a look at academic papers and make them easy to digest.

Reading academic work can be intimidating when you first start diving into a technical area. Academic Papers Made Simple will help you dip your toe into the technical world while building up your knowledge.

You can stay on top of the latest developments in the field of AI and machine learning without having to dive deeply into lengthy papers.

Ready to learn more about how to measure and prevent gender bias in AI?

Of course, you are – let’s dive in!

What is the paper on gender bias?

This edition of Academic Papers Made Simple covers ‘Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations’, published in July 2019 by researchers at UCLA, the University of Virginia and the Allen Institute for Artificial Intelligence.

The paper tackles how you can measure and then mitigate gender bias in AI.

Which bucket of artificial intelligence does the paper fall into?

The paper explicitly discusses gender bias propagation in computer vision datasets; however, the learnings apply across multiple fields.

It is crucial not only to mitigate gender bias but also to measure its impact, so that people understand how their system is working.



What does the paper discuss on Gender Bias in AI?

This paper shows that balancing a dataset, for example so that it contains equal numbers of images of men and women performing different tasks, is not enough to eliminate bias in task association.

The authors find that algorithms learn to rely too heavily on other items within the image that imply gender.

These items can reinforce stereotypes seen in the real world.
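
To make this concrete, here is a minimal sketch (my own illustration, not code from the paper) of how you might check whether a ‘balanced’ dataset still carries contextual gender signals. The annotation format is a hypothetical example.

```python
from collections import Counter

# Hypothetical annotations: one entry per image, with a gender label,
# the task shown, and the other objects detected in the scene.
annotations = [
    {"gender": "woman", "task": "cooking", "objects": ["knife", "child"]},
    {"gender": "man", "task": "cooking", "objects": ["knife", "beer"]},
    # ... more images
]

# The dataset can look balanced at the task level...
task_counts = Counter((a["gender"], a["task"]) for a in annotations)
print(task_counts)

# ...while the surrounding objects still co-occur unevenly with gender.
object_counts = Counter(
    (a["gender"], obj) for a in annotations for obj in a["objects"]
)
print(object_counts)
```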

To tackle these issues, the team identify several techniques that can be applied to preprocess the dataset to eliminate bias.

The team mask areas of the image, such as the person’s face, in the training data so the algorithm can’t train on them.

This image shows how the team masked signals of gender when training an algorithm to recognise different tasks. Source: link
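
As a rough illustration of that masking step (a sketch under my own assumptions, not the authors’ code), you can black out a region such as a face bounding box before the image ever reaches the model. The box coordinates here are hypothetical.

```python
import numpy as np

def mask_region(image: np.ndarray, box: tuple) -> np.ndarray:
    """Black out a rectangular region (e.g. a detected face) so the
    model cannot learn gender cues from it. `box` is (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    masked = image.copy()
    masked[y1:y2, x1:x2] = 0  # replace the region's pixels with black
    return masked

# Example usage with a dummy 256x256 RGB image and a hypothetical face box.
image = np.random.randint(0, 255, size=(256, 256, 3), dtype=np.uint8)
masked_image = mask_region(image, (100, 60, 160, 130))
```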

What algorithms are used?

In the paper, they use the concept of dataset and algorithm leakage to understand the impact of gender bias in the algorithm.

This ‘leakage’ metric is one they define: it captures how strongly the signals in the dataset imply gender, and how much of that bias the trained algorithm picks up.
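
One way to approximate that idea in code (a simplified sketch, not the paper’s exact protocol) is to train a small ‘attacker’ classifier to predict gender, first from the dataset’s ground-truth labels and then from the model’s outputs. The closer its accuracy is to chance, the less gender information is leaking.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def leakage(features: np.ndarray, gender: np.ndarray) -> float:
    """Accuracy of an attacker predicting gender from `features`.
    Around 0.5 means little leakage (random guessing); closer to 1.0
    means more gender information is present."""
    attacker = LogisticRegression(max_iter=1000)
    return cross_val_score(attacker, features, gender, cv=5).mean()

# Hypothetical data: rows are images, columns are task annotations
# (ground-truth labels) or the trained model's predicted scores.
ground_truth_labels = np.random.rand(500, 10)
model_predictions = np.random.rand(500, 10)
gender = np.random.randint(0, 2, size=500)

print("dataset leakage:", leakage(ground_truth_labels, gender))
print("model leakage:  ", leakage(model_predictions, gender))
```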

The team hopes that the concepts they propose can be applied in other studies to measure and mitigate algorithm bias.


What did they do to tackle Gender Bias in AI that’s innovative?

In this paper, they identify a crucial driver of gender bias.

They tackle the algorithm’s over-sensitivity to other features that imply gender. It is innovative because, unlike other approaches, they are not trying to remove just the ‘gender’ label.

The approach the team takes is adversarial debiasing: they actively remove the signals that could lead to bias in the dataset. For example, an image of someone cooking is likely to also contain children.

In other pictures, children frequently appear with women, so the algorithm learns to associate women with cooking.

This association causes gender bias, so the team remove the items that imply gender from the image before classification.

In this paper, they mask not only faces but also any other items they identify as driving bias in the image, provided that removing them doesn’t prevent classification.
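
For readers curious what ‘adversarial’ looks like in practice, here is a minimal PyTorch-style sketch (my own simplification, not the authors’ implementation): a small adversary tries to predict gender from the task model’s internal features, and a gradient-reversal trick trains those features to carry as little gender signal as possible.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the
    backward pass, so the encoder learns to *fool* the gender adversary."""
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

encoder = nn.Sequential(nn.Linear(2048, 256), nn.ReLU())  # image features -> representation
task_head = nn.Linear(256, 10)   # predicts the task (e.g. cooking)
adversary = nn.Linear(256, 2)    # tries to predict gender

features = torch.randn(32, 2048)            # dummy batch of image features
task_labels = torch.randint(0, 10, (32,))
gender_labels = torch.randint(0, 2, (32,))

z = encoder(features)
task_loss = nn.functional.cross_entropy(task_head(z), task_labels)
adv_loss = nn.functional.cross_entropy(adversary(GradReverse.apply(z)), gender_labels)

# Minimising this joint loss keeps task accuracy high while the
# reversed gradients strip gender signal out of the representation.
loss = task_loss + adv_loss
loss.backward()
```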

Removing bias is a balancing act  – get it! 😉

What can you take from this work to tackle algorithmic bias?

One of the key takeaways from this work is that we cannot assume that algorithmic bias is removed by merely balancing a dataset.

Furthermore, algorithms rely on multiple points of reference to make predictions. These references often carry bias, which the algorithm can then amplify.

You need to be proactive at tackling the impact of inferred bias in your AI!

You can continue the discussion on data ethics here.
