
Data Bias Is Becoming A Massive Problem

November 22, 2017
by Greg Satell

Nobody sets out to be biased, but it’s harder to avoid than you might think. Wikipedia lists over 100 documented biases, from authority bias and confirmation bias to the Semmelweis effect: we have an enormous tendency to let things other than the facts affect our judgments. All of us, as much as we hate to admit it, are vulnerable.

Machines, even virtual ones, have biases too. They are designed, necessarily, to favor some kinds of data over others. Unfortunately, we rarely question the judgments of mathematical models and, in many cases, their biases can pervade and distort operational reality, creating unintended consequences that are hard to undo.

What makes data bias so damaging is that we are mostly unaware of it. We assume that data and analytics are objective, but that’s almost never the case. Our machines are, for better or worse, extensions of ourselves and inherit our subjective judgments. As data and analytics become a core component of our decision making, we need to be far more careful.

Overfitting The Past

Imagine you’re running a business that hires 100 people a year and you want to build a predictive model that would tell you what colleges you should focus your recruiting efforts on. A seemingly reasonable approach would be to examine where you’ve recruited people in the past and how they performed. Then you could focus your efforts on the best performing schools.

On the surface, that seems to make sense, but take a closer look and the approach is inherently flawed. First of all, 100 people spread across perhaps a dozen colleges is far from a statistically significant sample. Second, it’s not hard to see how one or two standouts or dullards from a particular school could skew the results massively.
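
To see how fragile such a ranking is, consider a quick back-of-the-envelope simulation (the numbers are invented purely for illustration): four schools that are, by construction, equally good, roughly eight hires from each, and a single standout and a single dullard dropped in by hand.

# Invented numbers, purely for illustration: four equally good schools,
# a handful of hires from each, plus one standout and one dullard.
import random

random.seed(42)

schools = ["A", "B", "C", "D"]
hires_per_school = 8  # ~100 hires spread over a dozen schools is a handful each

# Every school draws from the same distribution (mean 3.0 on a 1-5 performance scale)
ratings = {s: [random.gauss(3.0, 0.8) for _ in range(hires_per_school)] for s in schools}

# Hand-placed outliers: one star hire from B, one dud from C
ratings["B"][0] = 5.0
ratings["C"][0] = 1.0

for school, scores in sorted(ratings.items(), key=lambda kv: -sum(kv[1]) / len(kv[1])):
    print(school, round(sum(scores) / len(scores), 2))

Run it with a different random seed and the “best” school changes, even though no school is actually better than any other. That is the whole problem with drawing conclusions from a sample this small.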

A related problem is what statisticians call overfitting. Because every data set carries its own quirks and biases, the more specifically we tailor a predictive model to the past, the less likely it is to reflect the future. In other words, the more detailed you make your model to fit the data, the worse its predictions are likely to get.

That may seem counterintuitive, and it is, which is why overfitting is so common. People who sell predictive software love to say things like, “our model has been proven to be 99.8% accurate,” even though that is often a sign that their product is actually less reliable than one that is, say, 80% accurate but far simpler and more robust.
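
For the statistically inclined, here is a minimal sketch of what overfitting looks like in practice, using made-up data: a noisy straight-line trend, fit once with a straight line and once with a very flexible ninth-degree polynomial.

# Made-up data: a noisy straight-line trend, fit with a simple and a flexible model.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 12)
y_train = 2 * x_train + rng.normal(0, 0.3, size=x_train.size)
x_test = np.linspace(0, 1, 50)
y_test = 2 * x_test + rng.normal(0, 0.3, size=x_test.size)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit the past
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: error on past data {train_err:.3f}, error on new data {test_err:.3f}")

The flexible model hugs the past data far more closely, yet its error on fresh data is typically worse than the simple line’s, which is exactly the trade-off described above.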

Bias In The Learning Corpus

With humans, we construct learning environments thoughtfully. We design curricula, carefully selecting materials, instructors and students to try to get the right mix of information and social dynamics. We go to all this trouble because we understand that the environment we create greatly influences the learning experience.

Machines also have a learning environment called a “corpus.” If, for example, you want to teach an algorithm to recognize cats, you expose it to thousands of pictures of cats. In time, it figures out how to tell the difference between, say, a cat and a dog. Much like with human beings, it is through learning from these experiences that algorithms become useful.
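
A toy example (synthetic data, not a real vision system) shows how a lopsided corpus bends the result. Here the “model” is just a nearest-neighbor vote over a single made-up feature, trained on a corpus that is 95 percent dogs.

# Synthetic illustration: a corpus that is 95% "dog" teaches the model
# to call almost everything a dog.
from collections import Counter
import random

random.seed(1)

def make_corpus(n_cats, n_dogs):
    # toy "images": a single noisy feature that only weakly separates the classes
    corpus = [(random.gauss(0.0, 1.0), "cat") for _ in range(n_cats)]
    corpus += [(random.gauss(1.0, 1.0), "dog") for _ in range(n_dogs)]
    return corpus

corpus = make_corpus(n_cats=50, n_dogs=950)
print("corpus composition:", Counter(label for _, label in corpus))

def predict(x, corpus, k=15):
    # vote among the k most similar training examples
    nearest = sorted(corpus, key=lambda item: abs(item[0] - x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

cats_called_dogs = sum(predict(random.gauss(0.0, 1.0), corpus) == "dog" for _ in range(100))
print(f"{cats_called_dogs}/100 genuine cats labelled as dogs")

The algorithm does exactly what it was taught to do; the problem is what it was taught with.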

However, the process can go horribly awry, as in the case of Microsoft’s Tay, a Twitter bot that the company unleashed on the microblogging platform. In under a day, Tay went from friendly and casual (“humans are super cool”) to downright scary (“Hitler was right and I hate Jews”). It was profoundly disturbing.

Bias in the learning corpus is far more common than we realize. Do an image search for the word “Grandma” and you will get almost exclusively white faces. The same goes for prestigious titles like doctor, lawyer and scientist. When we query machines, all too often we find our own biases baked in.

Perpetuating Bias

For over a century, the intelligence quotient (IQ) has been the standard method to test intelligence and has been shown to be strongly correlated with educational, professional and economic outcomes. However, a strong correlation is not a perfect correlation and researchers have consistently found a number of sources of bias in the testing that can affect scores.

The flaws of IQ tests are well known, and educators are generally aware of them, so they are well placed to mitigate the problems that bias creates. Still, test results help shape the educational experience: students who test well are placed in different classrooms, get different curricula and are treated differently by teachers.

As Cathy O’Neil explains in Weapons of Math Destruction, today algorithms often determine what college we attend, whether we get hired for a job and even who goes to prison and for how long. Unlike IQ tests, these mathematical models are rarely questioned. They just show up on somebody’s computer screen and fates are determined.

Once you get on the wrong side of an algorithm, your life immediately becomes more difficult. Unable to get into a good school or to get a job, you earn less money and live in a worse neighborhood. Those facts get fed into new algorithms and your situation degrades even further. Each step of your humiliating descent is documented, measured and evaluated.

Correcting For Bias

In Thinking, Fast and Slow, Daniel Kahneman explains how humans can overcome their biases. He describes our thinking as two systems: System 1 is quick to judgment, while System 2 is slower and weighs evidence more carefully. With training and experience, we can learn to disengage System 1 and engage System 2 instead.

Yet we rarely do the same with machines. We don’t ask our algorithms to “sleep on it” or to get a second opinion. Often, we don’t even stop to question their judgments. If a human told us to make a decision in a certain way, we would want to know why, but when a mathematical model does it, we usually just accept it and move on.

We shouldn’t. Our data systems are designed by people and inherit many of our human flaws. We need to hold them to higher standards. Good systems, like good people, need to be transparent and accountable. We should know what information is being used, how factors are weighted and how conclusions are arrived at.
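
What might that look like in practice? One modest version: any scoring system should be able to print the information it used, the weight it put on each factor and how it arrived at its conclusion. The factors and weights below are hypothetical, purely to show the shape of such an audit.

# Hypothetical factors and weights, shown only to illustrate what an
# auditable score could look like.
weights = {
    "years_experience": 0.40,
    "referral":         0.25,
    "test_score":       0.30,
    "zip_code_income":  0.05,  # a factor an applicant might reasonably challenge
}

def score(applicant):
    return sum(w * applicant.get(name, 0.0) for name, w in weights.items())

applicant = {"years_experience": 0.6, "referral": 1.0, "test_score": 0.7, "zip_code_income": 0.2}

print("score:", round(score(applicant), 3))
for name, w in weights.items():
    print(f"  {name:17s} weight={w:.2f}  contribution={w * applicant.get(name, 0.0):.3f}")

That printout is nothing more than the standard argued for above: the information used, the weight on each factor and how the conclusion was reached, laid out where the people affected can see and challenge it.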

It’s been a long time since we simply accepted “the will of the gods” as an explanation for our fates. Now that those gods have been replaced by algorithms in black boxes, we need to keep questioning their objectivity. Anything less is not only bad practice, it’s immoral.

– Greg

 

An earlier version of this article first appeared on Inc.com

6 Responses
  1. Eduardo Muniz
    November 25, 2017

    So true! Thanks for the article. To address it, people need to use their critical thinking skills to ask the right questions and process the answers correctly.

  2. November 26, 2017

    Yes, I think that’s true.

    – Greg

  3. November 26, 2017

    Great topic, Greg! In all ways this is true, and as you note it is made much worse by often not questioning the methods or the data itself. As you say, there is bias built into the model, and of course statistics themselves can easily be warped into confirmation bias, but what concerns me most are the humans. Far too often, when receiving data or information, there is no deep dive, no analysis, no attempt to really understand what the data is telling us. Without that there is often complacency, so when the wrong decisions are made (which happens all the time) there is little realization of what to adjust and how. If one at least dives into the data, one realizes there are many possible outcomes even if one seems preferable. The analysis alone therefore provides both a check on things going wrong and alternate paths to get back on course. Without it, we are often deer in the headlights, wondering what went south on us and frozen in our decision making, which is why deer often get hit by that car. Thanks; let’s hope folks go beyond that summary slide before concluding.

  4. November 27, 2017

    Thanks Robert. I think you’re right about complacency and, in that sense, the data bias problem is similar to fake news. We are simply too gullible when we see information that reinforces our pre-existing beliefs.

    However, I do think that more can be done on the supply side. There are far too many black boxes, not enough transparency and testing. There are also no standards for algorithms beyond what someone who purchases software may be willing to believe.

    – Greg

  5. November 29, 2017

    In (reasonably decent) practice, it is rarely the case that there is no second opinion. Typically, output from machine models is an input to well-defined algorithms or operators, especially in cases where there is little room for error in any individual decision.

    In the cases where the average case matters (say, recommendations), I believe what you say holds. However, I would argue: does it really matter in that case? As the problem becomes significant (from a revenue point of view), it moves into the first category.

  6. November 29, 2017

    I think the issue is how a model is verified. For example, if you are designing a model for who should be hired, who should be paroled from prison or who gets into a college, you really don’t have much incentive to check for bias, but your system affects people nonetheless. At the same time, the people affected by the system do not have any way of checking how they are being evaluated.

    A classic example is someone who lives in a bad neighborhood. If their zip code is factored into a model’s evaluation of them, they will find it harder to get a job and get credit, which of course will make it harder for them to improve their situation.

    – Greg
