
Why Algorithmic Bias in AI Exists: Examples and Explanations

Updated: Mar 17, 2022

Algorithmic bias exists partly because algorithms are a mystery to many, even to the businesses using them. Learn why this bias exists, with real-world examples.

Most people don't know exactly how algorithms work. What's more interesting is that many organizations using algorithms have no clue either, and even the people training them sometimes don't fully understand the models they produce.


The complex nature of algorithms is part of why bias creeps into algorithms in artificial intelligence (AI), but there's more to it.


The goal of this post is to help anyone working with algorithms better understand why algorithmic bias exists, and to offer ways to minimize bias in your own algorithms and maintain consumer trust.


Examples of algorithmic bias in the real world


Various examples of algorithmic bias have been reported over the years across every industry. Here are some of the most widely covered.


Healthcare


Commonly used kidney-function formulas assign Black patients a healthier score than white patients for equivalent blood samples, which negatively affects their eligibility for a kidney transplant.
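

To make the mechanism concrete, here's a rough sketch of the widely published MDRD eGFR formula, which multiplied the estimate by a fixed coefficient for Black patients. The coefficients below are the commonly cited values; treat the numbers and the `egfr_mdrd` helper as illustrative, not clinical guidance.

```python
# A sketch of the widely published MDRD eGFR formula. The race coefficient
# (1.212, the commonly cited value; all numbers here are illustrative, not
# clinical guidance) inflates the estimated kidney function for Black
# patients given the same blood sample, pushing them further from the
# thresholds that trigger transplant eligibility.
def egfr_mdrd(serum_creatinine, age, female, black):
    gfr = 175 * serum_creatinine ** -1.154 * age ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr

same_sample = dict(serum_creatinine=1.4, age=50, female=False)
print(egfr_mdrd(**same_sample, black=False))  # ~54 mL/min/1.73m2
print(egfr_mdrd(**same_sample, black=True))   # ~65: a "healthier" score
```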


Unfortunately, despite growing awareness of this issue, not much has changed, and organizations continue to use algorithms that account for race.


Click here for a list of various algorithms in healthcare that take race into consideration.


Safety and legal

There has been plenty of controversy surrounding the use of AI in daily life, but you may be interacting with algorithmic bias every day without knowing it.


Self-driving cars rely on object detection algorithms to avoid hitting pedestrians and other hazards. One study found that these systems detect dark-skinned pedestrians less reliably, meaning self-driving cars could pose a higher risk of hitting them. This, of course, has serious safety and legal implications.


Financial industry


Algorithms are commonly used to approve or reject credit applications and to determine credit limits.


One algorithm in particular got a lot of press in 2019. Apple co-founder Steve Wozniak and his wife, who share accounts and assets, each applied for an Apple credit card. Wozniak received a credit limit ten times higher than his wife's and tweeted about it. The resulting outrage on Twitter led to an investigation into the issuing bank, Goldman Sachs.


Hiring process


Unfortunately, many organizations use unfair algorithms to filter out applicants. Perhaps the most notorious example of algorithmic bias in hiring is Amazon's recruiting algorithm, which was built using the company's existing hiring data as its training set.


The algorithm reportedly filtered out candidates whose resumes contained anything indicating they were women. Amazon said the algorithm was shut down and never actually used, but the example offers key insights into why algorithms become biased.


Why algorithmic bias in AI exists

Algorithmic biases exist in many parts of our daily lives, and most go unnoticed by the general public.


Now that we've covered a few common examples, you hopefully have a better sense of where algorithmic bias shows up day to day. But why does it exist in the first place?


The reality is that it exists for a variety of reasons, some purposeful and some not. In this next section, let's explore the most commonly cited ones.


Algorithms train on biased data


The most commonly cited reason for algorithmic bias is biased training data.


Whoever trains a model has to do their due diligence to diversify its training data. For example, a facial recognition system trained mostly on white faces is significantly less likely to correctly recognize Asian, Hispanic, or Black faces.
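

As a minimal illustration of the point, the synthetic sketch below trains a classifier on data that is 90% one group and 10% another, where the label follows a different pattern in each group. Everything here (the `make_group` helper, the features, the split) is made up; the point is only how accuracy collapses for the under-represented group.

```python
# Synthetic sketch: train on 90% group A / 10% group B, where the label
# depends on a different feature in each group, then measure accuracy per
# group. All data and proportions are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, rule_col):
    """Fake feature vectors; the label depends on a group-specific column."""
    X = rng.normal(size=(n, 2))
    y = (X[:, rule_col] > 0).astype(int)
    return X, y

Xa, ya = make_group(900, rule_col=0)   # over-represented group A
Xb, yb = make_group(100, rule_col=1)   # under-represented group B
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

for name, col in [("group A", 0), ("group B", 1)]:
    X_test, y_test = make_group(1000, rule_col=col)
    print(name, "accuracy:", round(model.score(X_test, y_test), 2))
# Typically: group A scores well above 0.9, group B barely above chance.
```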


However, it’s not the obvious classifications that lead to most cases of algorithmic bias. You might use data that is mindful of ethnicity, age, and gender bias; it’s the less obvious classifications that cause problems. For example, most people wouldn’t consider a zip code inherently racial, but zip code can be a strong predictor of race.
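

Here's a toy sketch of what a proxy looks like in practice: even with race removed from the features, a column like zip code can still predict it. The zip codes and counts below are invented for illustration.

```python
# Toy proxy check: drop race from the features, then see how well zip code
# alone predicts it. The zip codes and counts are invented.
import pandas as pd

df = pd.DataFrame({
    "zip":  ["10001"] * 80 + ["10002"] * 20 + ["10001"] * 10 + ["10002"] * 90,
    "race": ["A"] * 100 + ["B"] * 100,
})

# Predict each person's race as the majority race of their zip code.
majority = df.groupby("zip")["race"].agg(lambda s: s.mode()[0])
proxy_accuracy = (df["zip"].map(majority) == df["race"]).mean()
print(f"zip code alone predicts race with {proxy_accuracy:.0%} accuracy")  # 85%
```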


The New York State Department of Financial Services (NYSDFS) stated in its report at the conclusion of the investigation into Goldman Sachs: “The data used by creditors in developing and testing a model can perpetuate unintended biased outcomes.”

Bias in algorithms is usually unintentional, but that’s not an excuse.


There’s no transparency because source codes are “proprietary”


Programmers might mistakenly or intentionally build bias into the algorithms they create. Unfortunately, they can hide that bias behind the algorithms under the guise of being data-driven, mathematical, or scientific. Currently, there’s very little accountability.


Cathy O'Neil, the author of Weapons of Math Destruction, argues that just because some algorithms are marketed as fair and objective doesn’t mean they actually are. And because source code is legally protected as proprietary, businesses simply have to trust that programmers have done their due diligence.


NYSDFS’s ruling against Goldman Sachs found no intentional discrimination, but the larger debate isn’t whether companies like Goldman Sachs discriminate intentionally. Even though no violation was found, consumer trust still suffered due to the lack of transparency into the bank’s decision-making process.


Trying to replicate a historically biased decision-making process


Algorithms are used to mimic human decision makers and their decision-making processes.


So, hypothetically, if previous hiring managers favored white male engineers, an algorithm trained on past hiring data would magnify this bias. Although it generally makes sense to analyze a company’s past decisions when automating its decision-making process, it isn’t always a good idea, and the resulting bias can be hard to detect and prove.
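

A quick synthetic sketch of that feedback loop: if historical hiring decisions included a penalty against one group, a model trained on those decisions learns the penalty, even though nobody told it to. All numbers here are made up.

```python
# Synthetic sketch: past managers hired on skill but applied a penalty to
# one group. A model trained on those decisions learns the same penalty.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)  # 1 = historically disfavored group

# Historical decisions: skill matters, but group 1 is penalized.
hired = (skill - 1.5 * group + rng.normal(0, 0.5, n) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
print("learned group coefficient:", round(model.coef_[0][1], 2))
# Strongly negative: the model reproduces the historical penalty.
```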


Predictive policing is a notorious example of potential algorithmic bias resulting from historically biased decisions. Predictive policing algorithms use historical crime data to choose where to send police and to identify individuals with a high likelihood of committing crimes. These algorithms have come under fire over the years, especially in light of recent events.


There isn’t enough legislation yet


AI is a relatively new concept compared to, say, fiat money. As with cryptocurrency, regulations are only now starting to address the emerging technology. There’s a long way to go.


AI has grown fast, and regulation is trailing behind. One likely reason is a lack of education among policymakers: in general, they don’t yet understand AI algorithms as well as the people building and training them.


Solutions that are being explored to reduce algorithmic bias


Algorithmic bias in AI is being tackled from multiple angles. Beyond the obvious step of training your algorithm on a high-quality dataset, here are some of the solutions being discussed.


Transparency


Transparency about the factors affecting algorithms creates accountability and reduces confusion. Unfortunately, for now, businesses aren’t legally required to make their algorithms transparent. Thankfully, despite this, organizations like Twitter are leading the way toward algorithmic transparency.


NYSDFS found Goldman Sachs clear of any wrongdoing; however, its report included a section on Goldman Sachs’ lack of transparency, suggesting that “transparency as to account holders’ credit terms supports consumer trust.”


Unfortunately, the law only requires an explanation when credit is denied. In other words, even though explaining credit limit decisions isn’t yet required by law, doing so would build consumer trust.


Regulations


A promising White House memo addressing AI was released at the end of 2020. Although it covers bias only briefly, it’s a good indication that more regulations surrounding algorithmic bias may not be far off.


The hope is that businesses involved in creating algorithms become more mindful of bias. Ethics alone may not be enough of a motivator, but the fear of costly penalties down the road might push more businesses to seriously consider ways to mitigate bias in the algorithms they use.


Testing for fairness the same way accuracy is tested


A machine learning model wouldn't be released into the real world if it kept predicting oranges to be apples. Just as accuracy is tested, ways to test algorithmic fairness are being proposed. Ideally, fairness tests will become as important as accuracy tests before an algorithm is released and used on the public.
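

As one hedged sketch of what such a test could look like, the hypothetical `fairness_report` helper below reports two commonly proposed fairness metrics, the demographic parity gap (difference in selection rates) and the equal opportunity gap (difference in true positive rates), alongside plain accuracy. The metrics you'd actually use depend on context.

```python
# Sketch of a fairness "test suite" run alongside accuracy. Assumes exactly
# two groups and at least one positive example per group.
import numpy as np

def fairness_report(y_true, y_pred, group):
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    print("accuracy:", (y_true == y_pred).mean())
    rates, tprs = [], []
    for g in np.unique(group):
        members = group == g
        rates.append(y_pred[members].mean())      # selection rate
        positives = members & (y_true == 1)
        tprs.append(y_pred[positives].mean())     # true positive rate
    print("demographic parity gap:", abs(rates[0] - rates[1]))
    print("equal opportunity gap:", abs(tprs[0] - tprs[1]))

# Toy usage: predictions for six people across two groups.
fairness_report([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1], [0, 0, 0, 1, 1, 1])
```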


Resources to consider when minimizing algorithmic bias


Many business ideas and resources have been created to address algorithmic bias.

Final thoughts


Biases in algorithms are difficult to pinpoint because algorithmic systems are complex. Even though algorithmic bias usually isn’t intentional, simply trying to use an unbiased training dataset doesn’t guarantee your algorithm will be free of harmful bias. Some bias will always exist; the goal is to dramatically reduce its severity to promote fairness and equal opportunity.


Businesses that take algorithmic bias lightly risk damaging consumer trust. They also risk investing significant resources in building or buying algorithms that will expose them to lawsuits or fines once regulations become more prevalent.


Taking algorithmic bias seriously is a smart business decision. If your company has been considering AI solutions for some time but doesn’t know where to start, consider reaching out to AI experts with a full view of the industry. Companies like AI Partnerships create affiliate programs with a variety of AI startups and businesses to address all kinds of business needs.


For a refresher on training algorithms, check out our previous post.

