AI Bias: The Big Six
Over the decades, mathematicians and statisticians have proposed myriad definitions of fairness. At least 21 of them attempt to capture the various ways bias can manifest, and the ways to rectify it. What matters in AI is that these biases can affect both the data and the algorithmic processes that learn from it. In pursuit of ethical AI, let's take a look at six of the most important sources of bias. Here goes!
1. Societal Bias
Societal bias arises from discriminatory practices, contemporary or historical, that a community's citizens and organizations treat as a normal course of action. An example: an AI system that shows prejudice and discrimination against a particular group of people because that prejudice has been societally normalized in the data it learns from.
2. Feature Bias
This type of bias stems from a variable, an individual measurable characteristic, examined in a given event or statistical trial. We call such a variable a feature, and it may contain or encode bias. In machine learning and pattern recognition, features represent a model's explanatory variables, sometimes known as independent variables or regressors.
3. Sampling Bias
Algorithms are trained on data whose attributes are meant to represent an entire population, so that the model can ultimately make statistical inferences about that population. But no ML algorithm or AI system can be trained on the whole universe of data, so data scientists must work with a subset. Choosing that subset poorly creates sampling bias.
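A small sketch of the idea, using entirely hypothetical groups "A" and "B": when the collection process over-reaches one group, a statistic estimated from the sample drifts away from the population value.

```python
import random

random.seed(0)

# Hypothetical population: 50% group A, 50% group B.
population = ["A"] * 5000 + ["B"] * 5000
random.shuffle(population)

# Unbiased sample: drawn uniformly from the whole population.
uniform_sample = random.sample(population, 500)

# Biased sample: suppose records were collected where group A is far
# easier to reach, so the collection pool is 90% group A.
biased_pool = ([p for p in population if p == "A"][:900] +
               [p for p in population if p == "B"][:100])
biased_sample = random.sample(biased_pool, 500)

def share_of_a(sample):
    """Fraction of the sample that belongs to group A."""
    return sum(1 for p in sample if p == "A") / len(sample)

print(f"Group A share, uniform sample: {share_of_a(uniform_sample):.2f}")
print(f"Group A share, biased sample:  {share_of_a(biased_sample):.2f}")
```

The uniform sample tracks the true 50/50 split, while the biased sample reports a population that looks roughly 90% group A; any model trained on the latter inherits that distortion.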
4. Representation Bias
This kind of bias comes from choosing a subset of data that is neither large enough nor representative of the population. As a result, some groups end up underrepresented or overrepresented in the training data.
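One simple mitigation is to audit a training set's group proportions against known population shares before training. The sketch below assumes hypothetical groups and made-up share figures; the check itself (sample share divided by population share) is the point.

```python
from collections import Counter

# Hypothetical population shares (e.g. from census figures).
population_shares = {"group_1": 0.60, "group_2": 0.30, "group_3": 0.10}

# Hypothetical training labels: group_3 is almost absent.
training_labels = ["group_1"] * 700 + ["group_2"] * 290 + ["group_3"] * 10

counts = Counter(training_labels)
total = len(training_labels)

for group, pop_share in population_shares.items():
    sample_share = counts[group] / total
    ratio = sample_share / pop_share  # 1.0 means perfectly represented
    flag = "  <-- underrepresented" if ratio < 0.5 else ""
    print(f"{group}: population {pop_share:.0%}, sample {sample_share:.0%}{flag}")
```

Here group_3 makes up 10% of the population but only 1% of the training set, so the audit flags it; the 0.5 threshold is an arbitrary illustration, not a standard.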
5. Omitted-Variable Bias
Neural networks require data for training, but some training data includes sensitive personal information, classified as protected attributes such as race, gender, and ethnicity. Algorithm developers can omit such variables to avoid discrimination. But because of redundant encodings, where other features correlate strongly with the protected attribute, omitting such variables may only result in proxy discrimination.
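The redundant-encoding problem can be demonstrated in a few lines. In this hypothetical dataset, the protected attribute "group" is dropped from the features, but ZIP code remains and is strongly correlated with it, so a trivial majority-vote rule recovers the omitted attribute almost perfectly.

```python
from collections import Counter, defaultdict

# Hypothetical records: "group" is the protected attribute the developer
# omits; "zip_code" stays in the feature set but encodes it redundantly.
records = (
    [{"zip_code": "10001", "group": "A"}] * 95 +
    [{"zip_code": "10001", "group": "B"}] * 5 +
    [{"zip_code": "20002", "group": "B"}] * 90 +
    [{"zip_code": "20002", "group": "A"}] * 10
)

# "Predict" the omitted attribute using only ZIP code: take the
# majority group within each ZIP.
by_zip = defaultdict(Counter)
for r in records:
    by_zip[r["zip_code"]][r["group"]] += 1
majority = {z: c.most_common(1)[0][0] for z, c in by_zip.items()}

correct = sum(1 for r in records if majority[r["zip_code"]] == r["group"])
accuracy = correct / len(records)
print(f"Protected attribute recovered from ZIP alone: {accuracy:.1%}")
```

With 92.5% of records recoverable from ZIP code alone, a model that uses ZIP code can discriminate by group even though the group variable was never in the training data.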
6. Evaluation Bias
Evaluation bias occurs when an AI model uses inappropriate or disproportionate benchmarks for its predictions. One example: using individuals' credit history in hiring decisions risks evaluation bias, because no reliable correlation between credit history and employability has been established.
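A related failure is evaluating with a single aggregate metric. The sketch below uses fabricated prediction results for two hypothetical groups: overall accuracy looks acceptable, but breaking the metric out per group exposes a large gap the benchmark would otherwise hide.

```python
# Hypothetical (group, true_label, predicted_label) rows.
results = (
    [("majority", 1, 1)] * 90 + [("majority", 1, 0)] * 10 +
    [("minority", 1, 1)] * 6 + [("minority", 1, 0)] * 4
)

def accuracy(rows):
    """Fraction of rows where the prediction matches the true label."""
    return sum(1 for _, y, p in rows if y == p) / len(rows)

overall = accuracy(results)
per_group = {g: accuracy([r for r in results if r[0] == g])
             for g in ("majority", "minority")}

print(f"Overall accuracy: {overall:.1%}")
for g, acc in per_group.items():
    print(f"  {g}: {acc:.1%}")
```

The overall figure (about 87%) is dominated by the larger group; the minority group sits at 60%, which is exactly the kind of disparity per-group evaluation is meant to surface.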
Well-designed AI systems can help reduce the impact of biased human decisions. But because AI can also amplify these biases and worsen the situation, making bias mitigation an integral part of algorithm design is essential. That's on all of us in the AI space, and a challenge we should all accept to promote fairness in machine learning.
Interested in learning more about how to develop ethical AI? Our firm can help you put best practices in place to better serve your customers. Contact us! Quickly develop ethical AI that is explainable, equitable, and reliable with help from our complete AI IaaS. Sign up for FREE diagnostics.