AI Bias - What Is It and How to Avoid It?

Zoe Larkin · Content Queen

November 16, 2022

Algorithms are not neutral: they weigh people, events, and things differently depending on the purposes they serve and the data they were trained on. We must understand these biases before we can develop solutions that produce unprejudiced AI systems. This article discusses what AI bias is, looks at real-world examples, and explains how to reduce the risk of AI bias.

Let’s begin with an AI bias definition.

What is AI bias?

Machine Learning bias, also known as algorithm bias or Artificial Intelligence bias, refers to the tendency of algorithms to reflect human biases. It is a phenomenon that arises when an algorithm delivers systematically biased results as a consequence of erroneous assumptions made during the Machine Learning process. In today’s climate of increasing representation and diversity, this becomes even more problematic because algorithms could be reinforcing existing biases.

For example, a facial recognition algorithm could learn to recognize a white person more easily than a black person because white faces appear far more often in its training data. This can negatively affect people from minority groups, as discrimination hinders equal opportunity and perpetuates oppression. The problem is that these biases are not intentional, and it’s difficult to spot them until they have already been built into the software.

3 AI Bias examples

Let’s now take a look at a few AI bias examples that we can come across in real life.

1. Racism in the American healthcare system

At a time when the country is grappling with systemic prejudice, technology should help reduce health inequalities rather than aggravate them. Yet healthcare AI systems trained on non-representative data typically perform poorly for underrepresented populations.

In 2019, researchers found that an algorithm used in US hospitals to predict which patients would require additional medical care favored white patients over black patients by a considerable margin. The algorithm used patients’ past healthcare expenditures as a proxy for their medical needs, on the assumption that people who spend more on care need more care.

That proxy turned out to be strongly correlated with race: black patients with the same conditions spent less on healthcare than white patients. Researchers worked with Optum, a health services company, to reduce the bias by 80%. Had the AI never been questioned, it would have continued to discriminate against black patients.
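As a toy illustration of this proxy problem, here is a minimal sketch with entirely hypothetical data (not the actual hospital model): two groups have identical medical needs, but one spends less on care, so ranking patients by a spending-based "risk score" selects that group for extra care far less often.

```python
# Hypothetical illustration of proxy-label bias: spending stands in for medical need.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical distributions of true medical need...
group = rng.integers(0, 2, n)                 # 0 = group A, 1 = group B
need = rng.normal(50, 10, n)                  # true (unobserved) care need

# ...but group B spends ~20% less for the same need (e.g. due to access barriers).
spending = need * np.where(group == 1, 0.8, 1.0) + rng.normal(0, 2, n)

# A well-fit cost model outputs (approximately) expected spending, so here
# spending itself stands in for the model's "risk score".
threshold = np.quantile(spending, 0.75)       # top 25% are flagged for extra care
selected = spending >= threshold

print("share of group A selected:", selected[group == 0].mean())
print("share of group B selected:", selected[group == 1].mean())
# Despite equal need, group B is flagged for extra care far less often.
```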

2. Depicting CEOs as purely male

Women make up 27 percent of CEOs in the United States. However, according to a 2015 study, only 11 percent of the people who appeared in a Google Images search for the term "CEO" were women. A few months later, Anupam Datta and colleagues at Carnegie Mellon University in Pittsburgh conducted independent research and found that Google's online advertising system displayed ads for high-paying jobs to men much more often than to women.

Google responded to this discovery by pointing out that advertisers can specify to which individuals and websites the search engine should display their ads. Gender is one of the specifications that companies can set.

It has been suggested that Google's algorithm could have determined on its own that men are more suited to executive positions; Datta and his colleagues believe it could have learned this from user behavior. For example, if the only people who see and click on advertisements for high-paying jobs are men, the algorithm learns to show those advertisements only to men.

3. Amazon’s hiring algorithm

Automation has played a critical role in Amazon's e-commerce supremacy, whether in its warehouses or in its pricing decisions. According to people familiar with the project, the company's experimental recruiting tool used Artificial Intelligence to give job applicants ratings ranging from one to five stars – similar to how customers rate products on Amazon. When the company discovered that its new system was not evaluating applicants for software development jobs and other technical positions in a gender-neutral manner – it was biased against women – changes had to be made.

Amazon's computer models were trained on resumes submitted to the company over a ten-year period and learned to spot patterns in successful applications. Most of those resumes came from men, reflecting the industry's male dominance, so the algorithm learned that male applicants were preferred. It penalized resumes that indicated the applicant was a woman and demoted applications from graduates of two all-women's colleges.

Amazon edited the programs to make them neutral to these particular terms, but that did not prevent other biases from occurring. Recruiters looked at the tool's recommendations when searching for new hires but never relied solely on its rankings. Amazon disbanded the effort in 2017 after management lost faith in the initiative.
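The mechanism is easy to reproduce in miniature. The sketch below uses entirely synthetic resumes and labels (not Amazon's system or data): a simple text classifier is trained on historical hiring decisions in which the rejected candidates happen to be the ones with gender-correlated wording, and the learned weights pick up that correlation even though gender is never an explicit feature.

```python
# Hypothetical sketch: a text model trained on skewed hiring history
# learns to penalize gender-correlated words.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny synthetic "resume" corpus; label 1 = hired, 0 = rejected.
resumes = [
    "software engineer python chess club captain",          # hired
    "software engineer java hackathon winner",              # hired
    "backend developer python open source contributor",     # hired
    "software engineer python womens chess club captain",   # rejected
    "frontend developer javascript womens coding group",    # rejected
    "data engineer sql hackathon winner",                   # hired
]
hired = [1, 1, 1, 0, 0, 1]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect learned weights: gender-correlated tokens such as "womens" pick up
# negative coefficients, reproducing the historical skew rather than measuring ability.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for token, w in sorted(weights.items(), key=lambda kv: kv[1])[:5]:
    print(f"{token:12s} {w:+.2f}")
```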

Hiring practices can expose biases in artificial intelligence

How AI bias reflects society's biases

Unfortunately, AI is not safe from the tendencies of human prejudice. It can assist humans in making more impartial decisions, but only if we work diligently to ensure fairness in AI systems. The underlying data, rather than the method itself, is often the cause of AI bias. With that in mind, here are a few interesting findings that we’ve seen in a McKinsey study on tackling AI prejudice:

  • Models may be trained on data from human choices or data from social or historical disparities. For example, word embeddings (a set of Natural Language Processing techniques) trained on news articles may reflect social gender biases, as illustrated in the sketch after this list.
  • Data may be biased by the way they are gathered or chosen for use. For instance, in criminal justice AI models, oversampling particular areas may result in more data for crime in that area, which could lead to more enforcement.
  • User-generated data may create a bias feedback loop. One study found that searches for African-American-identifying names were more likely to return ads containing the word “arrest” than searches for white-identifying names. Researchers speculated that this happened because users clicked on the different ad versions at different rates for different searches, so the algorithm learned to show the most-clicked version more often.
  • A Machine Learning system may also detect statistical connections that are considered socially inappropriate or unlawful. For instance, a mortgage lending model might determine that older people have a greater probability of defaulting and lower their creditworthiness accordingly. If the model draws this conclusion solely on the basis of age, then we might be looking at illegal age discrimination.
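To make the first point concrete, here is a minimal sketch that probes a pretrained word embedding for gender associations. It assumes the gensim library and its downloadable "glove-wiki-gigaword-50" vectors (trained largely on Wikipedia and newswire text); the exact similarities depend on the corpus and model chosen.

```python
# Minimal probe of gender associations in pretrained word embeddings.
# Assumes gensim is installed; the vectors are downloaded on first use.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # GloVe trained on Wikipedia + Gigaword news

# Simple probe: which occupations sit closer to "she" than to "he"?
for occupation in ["nurse", "engineer", "receptionist", "programmer"]:
    she = vectors.similarity(occupation, "she")
    he = vectors.similarity(occupation, "he")
    print(f"{occupation:14s} she={she:.3f}  he={he:.3f}  leans {'she' if she > he else 'he'}")
```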

Another example worth mentioning here is the Apple Card. It accepted David Heinemeier Hansson's application but granted him a credit limit 20 times higher than his wife Jamie Heinemeier Hansson's. Janet Hill, the wife of Apple co-founder Steve Wozniak, was given a credit limit amounting to only 10 percent of her husband's. It is, of course, both inappropriate and illegal to judge creditworthiness based on gender.

With that in mind, here comes the big question:

What can we do about the biases in AI?

Here are some of the proposed solutions:

Testing algorithms in a real-life setting

Take job applicants, for example. If the data your machine learning system is trained on comes from one specific group of job seekers, your AI-powered solution might not be trustworthy beyond that group. This might not be an issue as long as you apply the AI to similar applicants, but problems arise when you use it on a different group of candidates who weren’t represented in your data set. In that scenario, you are essentially asking the algorithm to apply the prejudices it learned from the first group of candidates to a set of individuals for whom those assumptions may be incorrect.

To prevent this from happening and to identify and solve these issues, you should test the algorithm in a manner comparable to how you would utilize it in the real world.
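One practical version of such a test is to evaluate the trained model separately on each group it will actually encounter in production, rather than reporting a single aggregate score. The sketch below is a generic illustration; the column names ("group", "label") and the model are placeholders, not tied to any particular product.

```python
# Evaluate a trained classifier per subgroup to surface performance gaps.
# Hypothetical column names ("group", "label") and model, for illustration only.
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score

def evaluate_by_group(model, df: pd.DataFrame, feature_cols: list[str]) -> pd.DataFrame:
    """Return accuracy and recall for each subgroup in the evaluation data."""
    rows = []
    for group, subset in df.groupby("group"):
        preds = model.predict(subset[feature_cols])
        rows.append({
            "group": group,
            "n": len(subset),
            "accuracy": accuracy_score(subset["label"], preds),
            "recall": recall_score(subset["label"], preds),
        })
    return pd.DataFrame(rows)

# Usage (assuming `model` is already trained and `holdout` mirrors real-world traffic):
# report = evaluate_by_group(model, holdout, feature_cols=["years_experience", "skills_score"])
# print(report)  # large gaps between groups signal the model may not transfer fairly
```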

Accounting for so-called counterfactual fairness

We must also keep in mind that the definition of "fairness", and how it is computed, are both up for discussion. Fairness can also shift with external circumstances, which means the AI must account for those changes as well.

Researchers have also developed a broad range of methods to help AI systems satisfy fairness criteria, such as pre-processing the data, modifying the system's decisions after the fact, or integrating fairness definitions into the training process itself. "Counterfactual fairness" is one promising approach: it requires that a model's decisions be the same in a counterfactual world in which sensitive characteristics like race, gender, or sexual orientation have been altered.
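A simple, if partial, way to approximate this check in practice is to flip the sensitive attribute in each record and compare the model's predictions before and after. The sketch below is a generic illustration with placeholder column names; a full counterfactual-fairness analysis would also adjust the features that causally depend on the sensitive attribute.

```python
# Rough counterfactual check: do predictions change when only the sensitive
# attribute is flipped? (Placeholder column names; a complete analysis would
# also adjust features that depend on the sensitive attribute.)
import pandas as pd

def counterfactual_flip_rate(model, df: pd.DataFrame, sensitive_col: str = "gender") -> float:
    """Fraction of rows whose prediction changes when the sensitive attribute is flipped."""
    original = model.predict(df)

    flipped = df.copy()
    # Assumes a binary attribute encoded as exactly two values, e.g. "male"/"female".
    values = flipped[sensitive_col].unique()
    mapping = {values[0]: values[1], values[1]: values[0]}
    flipped[sensitive_col] = flipped[sensitive_col].map(mapping)

    counterfactual = model.predict(flipped)
    return float((original != counterfactual).mean())

# Usage (assuming a trained `model` and an evaluation frame `applicants`):
# print(counterfactual_flip_rate(model, applicants))  # ideally close to 0.0
```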

Consider Human-in-the-Loop systems

The goal of Human-in-the-Loop technology is to accomplish what neither a human being nor a computer can achieve alone. When a machine cannot solve a problem, humans step in and solve it. This process creates a continuous feedback loop.

With continuous feedback, the system learns and improves its performance with each subsequent run. As a result, human-in-the-loop setups produce more accurate models on rare data as well as improved safety and precision.
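In practice, this often takes the form of confidence-based routing: the model handles predictions it is confident about and escalates uncertain ones to a human, whose corrections are collected as new training data. The sketch below is a minimal, hypothetical version of that loop; the threshold and the review function are illustrative placeholders, not any particular product's API.

```python
# Minimal human-in-the-loop routing: low-confidence predictions go to a human,
# and the human's answers are collected for the next retraining round.
# Threshold and review function are illustrative placeholders.

CONFIDENCE_THRESHOLD = 0.80
review_queue: list[dict] = []   # items awaiting human review
new_labels: list[dict] = []     # human-corrected examples for retraining

def ask_human_reviewer(item) -> int:
    """Placeholder for a real review UI or task queue; here, prompt on the console."""
    return int(input(f"Label for item {item.get('id', '?')}: "))

def classify_with_human_fallback(model, item) -> int:
    """Return a label, deferring to a human when the model is unsure."""
    probabilities = model.predict_proba([item["features"]])[0]
    confidence = probabilities.max()
    predicted_label = int(probabilities.argmax())

    if confidence >= CONFIDENCE_THRESHOLD:
        return predicted_label                     # machine handles the easy cases

    review_queue.append(item)                      # human handles the hard cases
    human_label = ask_human_reviewer(item)
    new_labels.append({"features": item["features"], "label": human_label})
    return human_label
```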

Change the way people are educated about science and technology

In a piece for the New York Times, Craig S. Smith argues that tackling bias also requires a major change in the way people are educated about technology and science. Science is currently taught as if it were purely objective; what is needed is more multidisciplinary collaboration and a rethinking of education itself.

He states that some issues should be addressed and agreed upon globally, while others should be handled locally. As with the FDA, we need principles and standards, regulating bodies, people voting on these questions, and verification of algorithms. Building a more diversified data collection alone won't solve the problem; it's just one factor.

Will these changes solve everything?

Changes such as these would be beneficial, but some problems may require more than technological answers and call for a multidisciplinary approach, with ethicists, social scientists, and other humanities scholars contributing their perspectives.

Moreover, these changes alone may not help in situations such as determining whether a system is fair enough to be released, or deciding whether completely automated decision-making should be permitted at all in certain circumstances.

Rethinking how science and technology are taught from the ground up is key

Will AI ever be unbiased?

The short answer? Yes and no. An unbiased AI is possible in principle, but an entirely impartial AI is unlikely to ever exist, for the same reason that an entirely impartial human mind is unlikely to ever exist. An Artificial Intelligence system is only as good as the data it receives as input. If you could clear your training dataset of every conscious and unconscious preconception about race, gender, and other ideological notions, you would be able to build an AI system that makes impartial, data-driven judgments.

In the real world, however, we know this is unlikely. AI is shaped by the data it is given and learns from, and humans are the ones who generate that data. Human prejudices are numerous, and new biases are being identified all the time. It is therefore likely that neither an entirely impartial human mind nor an entirely impartial AI system will ever exist. After all, people generate the skewed data, and people and human-made algorithms are what verify that data to detect and correct biases.

However, we can combat AI bias by testing data and algorithms and using best practices to gather data, use data, and create AI algorithms.

Summary

As AI becomes more advanced, it will play an increasingly significant role in the decisions we make. AI algorithms already inform medical decisions and policy changes that have significant impacts on people's lives. For this reason, it is essential to examine how biases can influence AI and what can be done about them.

This article proposed a few possible solutions, such as testing algorithms in real-life settings, accounting for counterfactual fairness, considering human-in-the-loop systems, and changing how people are educated about science and technology. These alone will not eliminate AI bias, and the problem ultimately requires a multidisciplinary approach. The best way to fight AI bias is to evaluate data and algorithms carefully and to follow best practices when collecting and using data and when building AI algorithms.

Now that you're here

Levity is a tool that allows you to train AI models on images, documents, and text data. You can rebuild manual workflows and connect everything to your existing systems without writing a single line of code. If you liked this blog post, you'll love Levity.

Sign up
