

Biased Machine-Learning Models: How to Recognize and Avoid Them

By Michelle Gardner | October 11, 2018

Machine learning and artificial intelligence (AI) are two of the most-hyped innovations in technology, but neither of them would exist without models. Models are decision frameworks that use historical data and machine learning (or augmented analytics) to predict outcomes. Data about what has happened in the past goes in, and a prediction about what will happen in the future comes out. Even more impressive, a model can learn from its own mistakes to improve its accuracy.

>> Related: Predictive Analytics 101 <<

Machine-learning algorithms can be wonderful—but they’re still susceptible to bias. In fact, just recently, Amazon had to scrap a recruiting tool that was biased against women. The model behind the tool used data from the past 10 years to detect patterns in applicant resumes and predict which applicants would be best for new roles. Based on the data, the algorithm assigned new job candidates a score from one to five to help hiring managers find the best applicants.

The problem? Most of the resumes from the past decade came from men—and, therefore, most new hires in that timeframe were men. The model learned that male candidates were preferable to female candidates, penalizing anyone with the word “women’s” on their resume (for example, someone with a degree in Women’s Studies).

Amazon’s failed project highlights a hurdle for any machine-learning project that includes demographic data (like age, gender, and race). Algorithms built to predict outcomes for employee hiring, bank loan approvals, home rental applications, credit card approvals, and more are all subject to bias.

Questions these algorithms may be trying to answer include:

  • What percentage of applicants like this get approved or hired?
  • How many approved applicants are women?
  • What percentage of hired employees are under age 35?
  • What percentage of approved applicants are African American?

As the Amazon example demonstrates, the data you start with will inform what your model learns. If 85 percent of approved loan applicants have historically been men, the model will learn to prefer men. If you use this biased data to create a predictive analytics model, you risk making the process biased in all your future decisions.
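To make that concrete, here is a minimal sketch (in Python with pandas) of the kind of sanity check that can surface this imbalance in historical data before you train anything. The file and column names (loan_history.csv, approved, gender, age) are hypothetical placeholders for whatever your own dataset contains.

  import pandas as pd

  # Hypothetical historical loan data; the file and column names are placeholders
  df = pd.read_csv("loan_history.csv")

  # Compare the overall approval rate with approval rates per demographic group
  overall_rate = df["approved"].mean()
  rate_by_gender = df.groupby("gender")["approved"].mean()
  under_35_share = (df.loc[df["approved"] == 1, "age"] < 35).mean()

  print(f"Overall approval rate: {overall_rate:.1%}")
  print(rate_by_gender)
  print(f"Share of approved applicants under age 35: {under_35_share:.1%}")

Large gaps between groups in numbers like these are a warning sign that a model trained on this data may simply learn to reproduce the historical preference.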

How Do You Avoid Biased Data?

Logi’s Director of Predictive Analytics, Sriram Parthasarathy, recommends two techniques for handling biased data in cases where one class is over-represented (as in the Amazon example): under-sampling and over-sampling. Both these techniques involve tweaking the way we choose the rows for the training data.

To illustrate how they work, Sriram uses the following example: 98 percent of the population does not have cancer (making this an over-represented class), while 2 percent of the population has cancer (making this the minority or under-represented class).

He explains: “One common technique to use when training is to make use of more rows with the answer ‘Yes’ (the under-represented answer) and fewer rows with the answer ‘No’ (the over-represented answer). When you take more samples from the under-represented class (has cancer) for training, that is called over-sampling. When you take fewer samples from the over-represented class (does not have cancer), that is called under-sampling.”
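As a rough sketch of what that looks like in practice, the snippet below uses scikit-learn’s resample utility to randomly over-sample the under-represented class and under-sample the over-represented one. The screening_data.csv file and the has_cancer label column are hypothetical, mirroring the 98 percent / 2 percent example above.

  import pandas as pd
  from sklearn.utils import resample

  df = pd.read_csv("screening_data.csv")      # hypothetical file
  majority = df[df["has_cancer"] == 0]        # over-represented class (~98%)
  minority = df[df["has_cancer"] == 1]        # under-represented class (~2%)

  # Over-sampling: draw extra minority rows (with replacement) until the classes balance
  minority_upsampled = resample(minority, replace=True,
                                n_samples=len(majority), random_state=42)
  oversampled_train = pd.concat([majority, minority_upsampled])

  # Under-sampling: keep only a random subset of the majority rows
  majority_downsampled = resample(majority, replace=False,
                                  n_samples=len(minority), random_state=42)
  undersampled_train = pd.concat([majority_downsampled, minority])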

In short, under-sampling is useful when you have a huge amount of data, while over-sampling is preferred when your data is limited. Sriram also recommends hybrid methods that use the over-sampling technique but create synthetic samples from the under-represented class instead of copying existing rows. This is called the Synthetic Minority Over-sampling Technique (SMOTE).
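For reference, here is a minimal SMOTE sketch using the third-party imbalanced-learn package (not part of Logi Predict). The toy dataset is generated on the fly to mimic the 98/2 split from the cancer example above.

  from collections import Counter
  from imblearn.over_sampling import SMOTE          # pip install imbalanced-learn
  from sklearn.datasets import make_classification

  # Toy data with roughly a 98% / 2% class split
  X, y = make_classification(n_samples=10_000, weights=[0.98, 0.02],
                             random_state=42)
  print("Before SMOTE:", Counter(y))

  # SMOTE synthesizes new minority-class rows by interpolating between
  # existing minority samples instead of duplicating them
  X_resampled, y_resampled = SMOTE(random_state=42).fit_resample(X, y)
  print("After SMOTE:", Counter(y_resampled))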

As a data scientist, you should look into potentially biased data and alert others to investigate when needed. Note that data that looks biased can still produce the right outcomes, but it is always worth investigating. Doing so will help your company avoid biased decisions and the legal problems that can stem from a biased machine-learning model.

See how Logi can help with your predictive analytics needs. Sign up for a free demo of Logi Predict today >

About the Author

Michelle Gardner is the Content Marketing Manager at Logi Analytics. She has over a decade of experience writing and editing content, with a specialty in software and technology.
