Racial and Gender Biases in Machine Learning Models

Informational article by Afia Zahin

In 2015, Amazon realized that an experimental algorithm it had built to screen job applicants was biased against women. You might wonder how this happened: how could a company known for its technology create such a flawed process? The algorithm had been trained on historical data, in this case the resumes submitted to the company over the preceding years. Because most of those resumes came from men, the model taught itself to favor male candidates over female ones.
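
To see the mechanism concretely, here is a minimal sketch in Python, using synthetic data and a simple logistic regression rather than anything from Amazon’s actual system, of how a model trained on biased historical hiring decisions absorbs that bias:

```python
# A minimal sketch (hypothetical data, not Amazon's actual system) of how a
# model trained on historically biased hiring decisions learns that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Feature 0: candidate skill (what we actually want to hire on).
# Feature 1: a proxy for gender, e.g. the word "women's" appearing in a
# resume ("women's chess club captain"). 1 = present, 0 = absent.
skill = rng.normal(size=n)
gender_proxy = rng.integers(0, 2, size=n)

# Historical labels: past recruiters favored candidates without the proxy
# term, so the recorded "hired" outcome depends on gender, not just skill.
hired = (skill + 1.5 * (1 - gender_proxy) + rng.normal(size=n) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, gender_proxy]), hired)

# The learned weight on the gender proxy is strongly negative: the model has
# "taught itself" to penalize resumes that mention women's activities.
print("weight on skill:        %+.2f" % model.coef_[0][0])
print("weight on gender proxy: %+.2f" % model.coef_[0][1])
```

Running it prints a strongly negative weight on the gender proxy: the model never sees a “gender” column, yet it learns to downgrade resumes containing gendered terms, much as Amazon’s tool reportedly penalized the word “women’s.”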

Today, it is easy to overlook the dark side of AI while focusing on the speed of its decision-making. Many tout ‘unbiased feedback’ as one of the best parts of AI, but that neutrality is an illusion. In October 2019, researchers found that an algorithm used on more than 200 million people in US hospitals to predict which patients would likely need extra medical care heavily favored white patients over Black patients. The algorithm used past healthcare spending as a proxy for medical need, and because less money had historically been spent on Black patients, it systematically underestimated how sick they were.

Eritrean computer scientist Timnit Gebru, former co-lead of Google’s Ethical AI team, identified this dark side, finding that AI systems discriminate against women and people of color. In December 2020, she co-authored a groundbreaking paper, “On the Dangers of Stochastic Parrots,” which ultimately got her fired from Google. The paper highlighted the biases in large language models (LLMs) and detailed the risks they pose, including over-reliance on training data from wealthy countries with greater internet access. She argued that AI-generated language would become homogenized, reflecting the practices of the richest countries and communities. The research was an awkward fit for Google, a leader in AI development; according to Google AI chief Jeff Dean, it had been submitted without the required internal review. While the company portrayed Gebru’s departure as a voluntary resignation, Gebru stated that she was forced out.

But that didn’t stop her from working toward a vision of a world with ethical AI. Gebru joined Northwestern University as an assistant professor working on AI ethics and the social impacts of technology, encouraging the next generation to learn about ethics in AI and motivating them to fight for equality in the tech world. In 2021, she founded DAIR, the Distributed AI Research Institute, an independent research organization that examines AI’s societal harms, particularly those affecting marginalized communities. Her work still spans algorithmic bias, transparency, and ethical AI governance, and to this day she continues to speak out about the risks of language models and against discrimination.

Dr. Joy Buolamwini, a Ghanaian-American-Canadian computer scientist, realized that a facial recognition model could not detect her face because of her dark skin while she was working on an art project at the MIT Media Lab. The experience inspired her Gender Shades research project on gender and skin tone, which carefully examined commercial facial analysis systems from Microsoft, IBM, and Face++. The study revealed that while these systems misclassified the gender of lighter-skinned men less than 1% of the time, their error rates for the darkest-skinned women ran as high as 47%.
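
The method behind these numbers is disaggregated evaluation: reporting error rates separately for each intersectional subgroup rather than quoting one overall accuracy that hides the gap. A minimal sketch, using a handful of hypothetical predictions rather than the study’s actual data:

```python
# A minimal sketch of disaggregated evaluation, the method behind Gender
# Shades: instead of one overall accuracy number, error rates are reported
# separately for each intersectional subgroup. Data here is hypothetical.
from collections import defaultdict

# Each record: (subgroup label, true gender, gender predicted by the model).
predictions = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),    # misclassified
    ("darker-skinned female", "female", "female"),
    ("darker-skinned female", "female", "male"),    # misclassified
    # ... a real audit would use hundreds of faces per subgroup ...
]

totals, errors = defaultdict(int), defaultdict(int)
for group, truth, predicted in predictions:
    totals[group] += 1
    errors[group] += (truth != predicted)

# A single overall accuracy would hide the gap these per-group rates expose.
for group in totals:
    rate = 100 * errors[group] / totals[group]
    print(f"{group}: {rate:.0f}% error ({errors[group]}/{totals[group]})")
```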

Having found that these discriminatory results stem from data imbalances, Buolamwini introduced the Pilot Parliaments Benchmark (PPB), a diverse dataset designed to address the lack of representation in the datasets the field had historically used, which were typically composed of over 75% male and 80% lighter-skinned faces. The benchmark makes more equitable testing standards possible. In 2016, she also founded the Algorithmic Justice League (AJL), an organization that works to challenge bias in decision-making software, using art, advocacy, and research to highlight the social implications and harms of artificial intelligence.
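
The imbalance itself is straightforward to measure. Below is a minimal sketch, with hypothetical records and field names, of the kind of composition audit that motivates a benchmark like PPB:

```python
# A minimal sketch of the kind of composition audit that motivated the
# Pilot Parliaments Benchmark: counting how a face dataset breaks down by
# gender and skin type. Field names and records here are hypothetical.
from collections import Counter

# Hypothetical metadata for a face dataset; real benchmarks like PPB label
# skin type using the Fitzpatrick scale (I-VI).
faces = [
    {"gender": "male", "skin": "lighter"},
    {"gender": "male", "skin": "lighter"},
    {"gender": "male", "skin": "lighter"},
    {"gender": "female", "skin": "darker"},
    # ... thousands more records in a real dataset ...
]

for attribute in ("gender", "skin"):
    counts = Counter(face[attribute] for face in faces)
    total = sum(counts.values())
    breakdown = ", ".join(f"{value}: {100 * n / total:.0f}%"
                          for value, n in counts.most_common())
    print(f"{attribute}: {breakdown}")

# A dataset that is, say, 75% male and 80% lighter-skinned will reward
# models that perform well only on those groups.
```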

No matter how biased and discriminatory AI can be toward women and people of color, major tech companies like Google will always try to cover these issues up in any way possible. However, they can never make the public forget the consequences of biased algorithms or of AI without ethics. The Gebrus and Buolamwinis of the world will always speak up.