What causes an AI program to treat genders differently?

Artificial intelligence tends to perform worse on women, but why? What can be done to ensure that AI-related technologies work well for everyone?

Experts from the UiT Machine Learning Group discuss what makes an AI treat genders differently and what can be done to ensure that AI-related technologies work well for everyone. Photo: Illustration generated by Adobe Firefly.
Bjørklund, Petter petter.bjorklund@uit.no Communications Adviser / Machine Learning
Published: 07.03.24 18:27 Updated: 07.03.24 18:36
Society and Democracy, Technology

There is little doubt that artificial intelligence (AI) continues to show great potential in helping us in various areas, such as providing quicker medical treatment or streamlining time-consuming tasks at the workplace.

In order for AI technology to be helpful to society, it is important that such tools work well for everyone, regardless of gender, ethnicity, or sexual orientation. However, this is an area where the technology tends to fall short, say AI researchers at UiT The Arctic University of Norway.

– AI models have often performed less accurately on female populations compared to male populations, says PhD candidate Suaiba Salahuddin. Along with PhD candidate Srishti Gautam and associate professor Elisabeth Wetzer, she is part of the UiT Machine Learning Group and the Center for Research-based Innovation Visual Intelligence.

From the left: PhD candidates Suaiba Salahuddin and Srishti Gautam, and associate professor Elisabeth Wetzer. Photo: Petter Bjørklund/UiT

The researchers use Amazon's AI-based hiring tool as an example. The tool, which was supposed to streamline hiring processes at the company, quickly proved to favor applications written by men over those written by women.

Large research projects like "Gender Shades" have also found that facial recognition tools from tech giants like Microsoft, IBM, and Face++ recognize fewer female faces compared to male ones.

These issues may undermine people’s trust in AI technologies.

– Such issues also pose a barrier to their widespread adoption, Gautam explains.

But what makes an AI system treat people differently based on gender?

Bias in data

How an AI system behaves depends on the large amounts of data it is trained on, which can be in the form of images, video, or text. The patterns and correlations the system identifies in this dataset remain ingrained in the AI's “brain” as it performs a given task.

Such "big data" can represent different subgroups, such as people of different genders or skin tones. Which groups are represented in the training data, and to what extent, has a significant impact on which sub-populations the AI program works better or worse on, the researchers explain.

– The selection of data used to train a model, and how it was collected, may over- or under-represent individuals from specific groups, for example by gender identity. This ”bad” data can then negatively influence the training of AI algorithms, producing biased and even discriminatory decisions, Salahuddin explains.
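
One way developers look for this in practice is to measure a trained model's accuracy separately for each subgroup in a held-out test set. The sketch below is a minimal, hypothetical illustration in Python with scikit-learn, not code from the UiT researchers; the model, the test data, and the group labels are placeholders.

```python
import numpy as np
from sklearn.metrics import accuracy_score

def accuracy_by_group(model, X_test, y_test, group):
    """Accuracy of `model` computed separately for each subgroup label
    (e.g. 'female' / 'male'). A large gap between the groups is a first
    symptom of under-representation in the training data."""
    preds = model.predict(X_test)
    return {
        g: accuracy_score(y_test[group == g], preds[group == g])
        for g in np.unique(group)
    }
```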

How an AI system behaves depends on the large amounts of data it is trained on, which can be in the form of images, video, or text. Which groups are represented in the training data, and to what extent, has a significant impact on which sub-populations the AI program works better or worse on. Photo: MostPhotos

Data are also "historical," meaning they reflect the reality of the time they were collected. They may often be several decades old, and thus reflect gender-related biases from particular points in time.

Historical data such as salary statistics may, for example, show that women have, on average, lower incomes than men. If an algorithm is trained on this data, it may quickly assume that earnings are correlated with gender, and use this bias to determine who should or should not be approved for a loan.

– With such data, the model will likely reject more applications from women than men, Wetzer mentions.
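
To make the loan example concrete, the toy sketch below (our own illustration with made-up, synthetic numbers, not an analysis from the researchers) trains a simple classifier on "historical" approvals that were driven purely by income, in a population where men were assigned higher incomes on average. The trained model then reproduces the gender gap in its own approval rates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)                   # 0 = woman, 1 = man (toy encoding)
# Historical skew: men are sampled with higher average incomes
income = rng.normal(50_000 + 12_000 * gender, 10_000)
past_approval = (income > 55_000).astype(int)    # past decisions followed income alone

# Train on features that include gender
X = np.column_stack([income / 100_000, gender])
model = LogisticRegression().fit(X, past_approval)

# The model reproduces the historical gap in its own decisions
for g, name in [(0, "women"), (1, "men")]:
    rate = model.predict(X[gender == g]).mean()
    print(f"predicted approval rate, {name}: {rate:.0%}")
```

Even though gender never decided the historical outcomes directly, the income gap alone is enough for the model to approve far fewer of the women's applications.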

It is therefore an important rule that AI is developed in such a way that it does not emphasize gender-related features. Even so, the programs are quite adept at picking up gender-related factors that a developer may not have realized exist in the data.

– For instance, the AI tool from Amazon learned to screen out resumes which contained the word ”woman” or mentioned women-associated colleges, Salahuddin explains.

Mute “black boxes”

Another challenge with AI is that such systems often do not explain the decisions they make, thus acting as inexplicable "black boxes".

As the data are large, complex, and may contain biases that one has not thought about, this "inexplicability" can make it challenging for developers to fully understand why an AI program treats genders differently, and to make the necessary corrections to the model.

So what can be done to prevent AI from using potential biases in the data when performing its tasks?

Developing AI models that can explain the reasons behind their decisions is an important goal within the field, and something AI researchers at UiT are actively working on as a means of combating the bias problem in AI.

This has been investigated by Gautam in her doctoral thesis "Towards Interpretable, Trustworthy and Reliable AI," which she submitted in December last year. As the title suggests, Gautam proposed and developed novel AI methods that can assist in making AI technology less susceptible to inheriting and using potential biases in its training data.

Srishti Gautam proposed and developed novel AI methods that can assist in making AI technology less susceptible to inheriting and using potential biases in its training data. Photo: Petter Bjørklund/UiT

One of these is the "KMEx method": a method that can make the often mute "black boxes" "explain" which elements in the training data they focused on when performing their task, without the need to retrain the model.
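
The thesis contains the full details, but the general idea can be sketched: take the latent features an already-trained network produces for its training images, cluster them, and use the real training images closest to each cluster centre as human-inspectable "prototypes" that new decisions can be compared against. The snippet below is our own minimal sketch of that idea under those assumptions, not the actual KMEx implementation; the variable names are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def prototype_indices(latent_features, n_prototypes=10):
    """Cluster latent features from an already-trained encoder with k-means
    and return, for each cluster, the index of the real training example
    closest to the cluster centre. The underlying model is not retrained."""
    kmeans = KMeans(n_clusters=n_prototypes, n_init=10).fit(latent_features)
    return [
        int(np.argmin(np.linalg.norm(latent_features - c, axis=1)))
        for c in kmeans.cluster_centers_
    ]

def rank_prototypes(query_feature, latent_features, proto_idx):
    """Order the prototypes by closeness to a new example, so a developer
    can inspect which training images the model's decision most resembles."""
    dists = np.linalg.norm(latent_features[proto_idx] - query_feature, axis=1)
    return [proto_idx[i] for i in np.argsort(dists)]
```

With the prototypes in hand, a developer can look directly at the examples the model leans on and check, for instance, whether one gender dominates them.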

The benefits of "self-explainable" models are significant. If the AI program can explain its decisions, for instance why it produces different results for women and men, it becomes easier for developers to understand what the AI is focusing on in the data.

– Adopting more "explainable" or "transparent" AI models can significantly enhance safety in critical scenarios. For instance, in cases like the Amazon recruiting model, if the model offers explanations for its decisions, such as the reasons behind a CV's rejection, it becomes easier to identify and address any underlying biases, Gautam says.

Results from her thesis, in which KMEx was tested on seven different image datasets, show that the method can be a promising tool for making these "black box" algorithms more self-explainable.

– Marginalized and underrepresented groups stand to benefit from efforts to reduce biases in AI, aiming to ensure equitable applications and prevent the perpetuation of social inequalities, Gautam states.

Lack of diversity in the field

However, the bias problem in AI extends beyond just data. Men comprise around 70% of those who work on AI development globally, meaning that AI-based solutions are often designed and developed by this single group.

AI programs developed by a single group risk overlooking other groups', such as minorities', views on and experiences with the technology. Such AI solutions will then be based on how that homogeneous group understands, experiences, and interprets the world.

– When AI systems are predominantly designed by a homogeneous group, they are more prone to embed and perpetuate existing biases, potentially leading to unfair outcomes, especially for underrepresented groups, Gautam mentions.

– Without incorporating diverse perspectives and experiences, it is unlikely that unbiased AI decision systems can be achieved, Salahuddin adds.

It is, in other words, important that the research field reflects the diversity of the population. An increased focus on diversity and on the inclusion of AI researchers of different genders, ethnicities, and sexual orientations is therefore essential for creating fair, diverse, and inclusive AI solutions.

– The field needs as many perspectives as possible so that minority voices are heard, Wetzer mentions.

However, ensuring a more inclusive agenda in the field requires a wide range of measures.

– We need role models. The lack of diverse role models in the field is a key contributor to the lack of diversity among students, researchers, and engineers further down the line, Wetzer states.

– We need to shed light on existing leaders in the field who come from a diverse set of backgrounds, so that young people have someone to identify with and be inspired and motivated by, she adds.

– Other measures may include creating educational programs that encourage diverse participation from an early age, actively recruiting and retaining a more diverse workforce, and fostering an inclusive environment where all employees can thrive, Salahuddin adds.

The necessity of regulation

AI technology is also evolving rapidly, which has led to discussion on how the development of AI technologies should be regulated. There is currently no explicit AI law in Norway.

Fortunately, the EU countries are working tirelessly to put such legislation in place, which will have a significant impact on how a potential Norwegian AI law takes shape.

Such regulatory guidelines, which the researchers warmly welcome, will be an important step in combating bias in AI.

– To address the issue of data bias, it is essential to implement regulations that ensure the development of AI technologies is guided by ethical and inclusive principles, Gautam concludes.

Visual Intelligence

Visual Intelligence is a Center for Research-based Innovation that is led by the UiT Machine Learning Group. The center consists of a consortium of corporate and public user partners from different business areas, such as The University Hospital of North Norway, The Cancer Registry of Norway, Helse Nord IKT, Kongsberg Satellite Services and the Norwegian Computing Center.
