Face it: AI, machine learning desperately need diversity training


As children, we ate what our parents put in front of us. The vast majority of the time, our tastes and preferences were shaped by what they liked and enjoyed; those are the foods we’re most familiar with and the ones our taste buds recognize.

The same is true for facial recognition software. The algorithms behind scanning technology, used to identify traits and characteristics or even to recognize persons of interest in crimes, are largely only familiar with the data they are “fed” by the people who create them.

Historically, most of these engineers have been male, and predominantly white, with little representation of people of color.

How, then, will the industry address a changing world, where ethnicities and national backgrounds are more interwoven and gender becomes less fixed and finite than it was even ten years ago?


Advanced technology, old-school perspective

“The world we live in is the largest practical manifestation of a collection of technologies that represent the inventiveness and needs of a very specific gender and ethnicity,” says Alessandria Sala, head of analytics research for Nokia Bell Labs in Ireland. Gender inequalities persist not only in society at large but also in the design process of the technology we use every day.

“Machine learning is starting to enable services that are becoming part of our daily routines, and once again the very core human rights of equality and inclusion are at risk of being overlooked, only on a much more significant scale,” she says.


Google Photos had a widely reported problem in which the program labeled people with dark complexions as gorillas. It was an embarrassing issue the company “corrected”, two years later, simply by removing “gorilla” as a recognizable label. Amazon, too, shifted gears, scrapping an AI-based hiring tool it had invested in after the tool was shown to favor men.

“Similarly to our brains, several ML algorithms are postulated on the principle of learning patterns from many examples. In humans, this is what our brain absorbs from life experiences, and it becomes part of what our unconscious brain leverages to help us make many mundane decisions with little cognitive effort,” Sala says. “However, even in humans, if this learning process is left unchallenged, it can lead us to biased and limited perspectives which have practical consequences in the quality of our decision-making abilities.”
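The pattern Sala describes can be made concrete with a toy sketch. The code below is not a real face-recognition system; it is a deliberately simple nearest-centroid classifier, with invented numbers, trained on data where one group supplies 90% of the examples. The model fits the majority group well and stumbles on the underrepresented one, which is exactly the failure mode of an "unchallenged" learning process.

```python
# Toy illustration (synthetic data, not any real system): a classifier
# trained on skewed data inherits that skew.

def centroid(points):
    return sum(points) / len(points)

# One scalar "feature" per sample; label 1 = "match", 0 = "no match".
# Group A dominates the training set; group B is barely represented,
# and B's feature values sit closer to the decision boundary.
train = (
    [(x, 1, "A") for x in (0.80, 0.82, 0.84, 0.86, 0.88, 0.90, 0.78, 0.81, 0.83)]
    + [(x, 0, "A") for x in (0.20, 0.22, 0.24, 0.26, 0.28, 0.30, 0.18, 0.21, 0.23)]
    + [(0.55, 1, "B"), (0.35, 0, "B")]   # only two group-B examples
)

pos = centroid([x for x, y, g in train if y == 1])
neg = centroid([x for x, y, g in train if y == 0])
threshold = (pos + neg) / 2  # classify by nearest centroid

def predict(x):
    return 1 if x >= threshold else 0

# Held-out samples: group B's values cluster near the threshold the
# majority-group data produced, so B gets misclassified more often.
test = {
    "A": [(0.85, 1), (0.25, 0), (0.80, 1), (0.22, 0)],
    "B": [(0.56, 1), (0.52, 1), (0.40, 0), (0.44, 0)],
}

for group, samples in test.items():
    acc = sum(predict(x) == y for x, y in samples) / len(samples)
    print(f"group {group}: accuracy {acc:.2f}")
```

Running this prints perfect accuracy for group A and a lower score for group B; the model is not "malicious", it has simply never been challenged with enough examples of the minority group.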

The road to accurate representation


As companies and governments invest more in advanced facial recognition technology, it is increasingly critical to develop methods to identify and mitigate the unconscious biases embedded in algorithms.

Sala says new studies on the question of fairness are being conducted and published, offering fresh opportunities to understand the lopsidedness of source material and representation when algorithms are created. “The concept of fairness is well-grounded in legal systems as it addresses anti-discrimination laws,” she says, and can therefore be argued in court as needed.

The scientific community is increasingly aware of this shortcoming, as are the media and governments.

“Today we are focusing on discussing the problem instead of initiating design policies and procedures to improve the status quo via legislation and regulation,” Sala says. “Unfortunately, it is not clear how long it will take our society to become truly inclusive, unbiased and fair. The good thing, in a way, is that we’ve already seen how easy it is for AI/ML to be biased… so now it’s up to us in the AI/ML community to take a broader inclusion perspective from the very early stages of our design software. The easiest way to do that is by leveraging diverse teams.”


Diversity isn’t just black and white, male or female.

It’s all that and more as society shifts.

“Tech companies and governments should jointly embrace the challenge of eliminating discrimination based on unconsidered minorities and strive for a world of true equal opportunities,” Sala says. “It’s absolutely critical to focus more attention on how to build diverse teams. Diversity needs to consider gender, ethnicity, LGBTQ+, disabilities, education and, most importantly, as some say: smart people who may not think exactly like you do.”

Sala argues that developers also need to “consider bias and diversity of mindsets from the get-go… focus on the fundamental understanding of the limits of these algorithms and to design theoretically provable alternatives. Only by deeply understanding the mathematics and the theory behind these models can we hope to confine and mitigate the unintended consequences that we are already starting to experience in our daily lives.”

About Futurithmic

It is our mission to explore the implications of emerging technologies, seeking answers to next-level questions about how they will affect society, business, politics and the environment of tomorrow.

We aim to inform and inspire through thoughtful research, responsible reporting, and clear, unbiased writing, and to create a platform for a diverse group of innovators to bring multiple perspectives.

Futurithmic is building the media that connects the conversation.
