Businesses can’t afford to ignore AI’s diversity problem

Facial recognition tools have significant error rates that differ by race. Google image search results for “CEO” show women in only 11 percent of the top images. An AI hiring tool from Amazon “learned” gender bias against women and favored male candidates.

We know diversity bias is rampant in artificial intelligence. But decisions based on prejudiced AI systems aren’t just an ethical dilemma; they’re a financial one. The less biased a system, the more likely it is to maximize profits, make better hiring or selling recommendations and provide accurate risk predictions. Unfortunately, creating and developing AI without bias presents unique challenges.

Where AI goes wrong: the design phase 

Bias errors occur when an algorithm produces results that are systematically prejudiced due to incorrect assumptions in the machine learning process. According to Morena Ferrario, who runs a cognitive collaboration hub near Milan, Italy, bias removal is, therefore, primarily a technological problem. 

Bias removal must be addressed in the design phase, through the data used and its quality and completeness.

Morena Ferrario

Within the design phase, there are three entry points through which bias can make its way into an AI system. The first is how scientists and engineers frame the “problem” itself. When creating a deep-learning model, computer scientists need to decide what they want to achieve. Nebulous concepts like “hireability,” “profitability,” “creditworthiness,” and other outcomes need to be translated into something a computer can predict.

Many of these decisions get lost in translation, which can backfire both ethically and financially. For example, an AI may decide to maximize profit margins by giving out subprime loans instead of maximizing the number of loans that are likely to be repaid.
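
To make this concrete, here is a minimal sketch, in Python with entirely hypothetical column names and values, of how the choice of prediction target encodes the business goal:

```python
# A minimal sketch of how the choice of prediction target encodes the business goal.
# All column names and figures (loan_amount, interest_collected, defaulted) are hypothetical.
import pandas as pd

loans = pd.DataFrame({
    "loan_amount":        [10_000, 5_000, 20_000, 8_000],
    "interest_collected": [4_500,    400,  1_200,   900],
    "defaulted":          [True,   False,  False,  False],
})

# Target A, "profitability": can reward high-interest subprime lending,
# even when many of those loans eventually default.
loans["label_profit"] = loans["interest_collected"] - loans["loan_amount"] * loans["defaulted"]

# Target B, "likelihood of repayment": optimizes for loans that get paid back.
loans["label_repaid"] = ~loans["defaulted"]

# Training the same features against label_profit versus label_repaid
# can produce very different, and differently biased, lending policies.
```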

Another issue? Many biased AI decisions stem directly from the datasets the algorithms have been fed. Either the data collected is not representative of reality, or it reflects existing prejudices. The second case is what happened when Amazon discovered that its recruiting tool overlooked female candidates. Because it was trained on historical hiring decisions that had favored men over women — the result of systemic bias — it learned to do the same, associating superior job candidates with “maleness.” 
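
A first line of defense is simply to inspect the historical data before training on it. The sketch below, with hypothetical column and file names, shows two quick checks: whether each group is represented at all, and whether past outcomes already encode a skew.

```python
# A quick representativeness check on historical training data.
# The file and column names (gender, hired) are hypothetical; adapt to your own schema.
import pandas as pd

history = pd.read_csv("historical_hiring_decisions.csv")  # hypothetical file

# 1. Is each group actually present in proportion to the applicant pool?
print(history["gender"].value_counts(normalize=True))

# 2. Do historical outcomes already encode a systemic skew?
print(history.groupby("gender")["hired"].mean())

# Large gaps in either table mean a model trained on this data will
# likely reproduce the same skew, as Amazon's recruiting tool did.
```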

Finally, bias can also be introduced in the data preparation phase, where you select the attributes you want the algorithm to consider or ignore. An attribute could be gender, education level or years of experience. The attributes you consider significantly influence prediction accuracy. When considering how to address late payments, managers for a credit card company might initially build a model with data that includes zip codes, type of car driven, or first names — without recognizing these data points are correlated with race or gender. 
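
One way to catch such proxy attributes, sketched below with hypothetical column and file names, is to check how strongly a candidate feature reveals a protected one before it ever reaches the model:

```python
# A minimal proxy check before the data-preparation step: how strongly does a
# candidate attribute (e.g. zip code) reveal a protected one (e.g. race)?
# File and column names are hypothetical.
import pandas as pd

accounts = pd.read_csv("credit_card_accounts.csv")  # hypothetical file

# If the racial mix varies sharply from one zip code to the next, zip code
# acts as a proxy for race even though race itself was never included.
print(pd.crosstab(accounts["zip_code"], accounts["race"], normalize="index"))

# The same check applies to first names, type of car driven, or any other
# seemingly neutral attribute under consideration.
```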

The role of diverse AI teams

an ethnically diverse team of colleagues working on an analytics  report for a company

Removing bias in the design phase is critical if an AI system is to produce equitable results. But as Ferrario points out, AI systems aren’t designed in a vacuum. There’s human bias to consider.

“Aside from the data used in the design phase, there’s also a mindset issue, which can be addressed by building inclusive teams with diverse backgrounds to integrate unique perspectives,” she says. 

Without diverse teams driving the development of AI tools, bias is more likely to creep in through these three entry points. For Ferrario, removing bias from systems and from workforces serves the same purpose: both make good business sense. After all, companies with more diverse teams have significantly higher revenues than companies that are below average in diversity.

“A successful team or an AI solution is the result of a balanced mix of diverse competencies. Failure may come from one expertise prevailing over the others,” she explains. “For example, if the data science theoretical profile is prevailing on the DevOps or system integration one, that would mean we have a great idea — but it’s not deployable.” 

The development of AI tools should be complemented by corporate education programs and hiring policies.

A focus on bias removal in the design phase, coupled with building teams that are diverse in gender, age, culture, economic status, and abilities, works to eliminate bias. The development of AI tools should therefore be complemented by corporate education programs and hiring policies. A range of diverse competencies in a team can ensure that no one perspective is given precedence over another.

AI bias affects ROI 

Biased AI systems are likely to become an increasingly widespread problem. But if AI is the cause, could it also be the solution? Ferrario thinks so. Her research and development team is currently using diversity as a primary method to come up with bespoke AI solutions for customers’ networks. Most recently, her Nokia AVA Analytics team worked on an AI for Hutchison 3 Indonesia, where they saw a 17 percent increase in spectral efficiency and reduced manual effort by 60 percent, without extra sites or hardware.

“Bias in data, bias in systems, and the subconscious bias of people can all limit or hinder the return on investments we make in AI,” explains Ferrario. 

IBM’s research teams are also working on automated bias-detection algorithms, which are trained to mimic human anti-bias processes in order to spot and mitigate bias. Of course, for these algorithms to be successful, they too will have to address bias entry points at various stages of the design phase, and within the team itself.
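
IBM has not published the details of those algorithms here, but the underlying idea can be illustrated with a much simpler check: compare how often a model hands out the favorable outcome to one group versus everyone else. The following sketch uses entirely hypothetical data and is not IBM’s method.

```python
# A simplified bias check in the spirit of automated bias-detection tools
# (not IBM's actual algorithms): compare favorable-outcome rates across groups.
import numpy as np

def disparate_impact(predictions: np.ndarray, group: np.ndarray,
                     protected_value, favorable=1) -> float:
    """Ratio of favorable-outcome rates: protected group vs. everyone else.
    A value far below 1.0 signals the model disadvantages the protected group."""
    protected = predictions[group == protected_value]
    others = predictions[group != protected_value]
    return (protected == favorable).mean() / (others == favorable).mean()

# Hypothetical model outputs and group labels.
preds  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
gender = np.array(["F", "F", "M", "M", "M", "F", "M", "F"])
print(disparate_impact(preds, gender, protected_value="F"))
```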

Outside experts like Ferrario are usually needed to challenge past and current practices, forcing organizations to question what preconceptions exist in their processes and to actively hunt for how those biases might manifest themselves in the data.

Meanwhile, data scientists, AI researchers and coders must work together to determine which data sets should be ignored or de-emphasized, ensuring the data is representative of the different races, genders, backgrounds, cultures and skill sets of the people who could be adversely affected. Once potential biases are identified, companies can block them by eliminating problematic data or removing specific components of the input data set.
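
As a final illustrative sketch, again with hypothetical column and file names, blocking a flagged proxy can be as simple as dropping the column, though that alone is rarely enough and the bias checks above should be re-run after retraining:

```python
# Once proxy attributes are identified, one blunt mitigation is to drop them
# and re-run the bias checks above. File and column names are hypothetical.
import pandas as pd

features = pd.read_csv("model_features.csv")             # hypothetical file
proxy_columns = ["zip_code", "first_name", "car_type"]   # flagged by earlier analysis

cleaned = features.drop(columns=proxy_columns, errors="ignore")

# Dropping columns alone rarely removes all bias; retrain the model and
# re-measure group outcome rates before trusting the result.
```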

The “democratization of AI” undoubtedly has the potential to do a lot of good, by putting intelligent, self-learning software in the hands of us all. But the removal of bias in the system is crucial, for fairness and for maximizing returns. 

