Be Aware of the Risk of AI Bias
August 15, 2025

Asking, “How can this possibly go wrong?” should be an essential exercise for any manager deploying AI tools.
How and why should you be concerned about identifying and managing bias in an AI-driven workspace? It sounds pretty dry, but it is in fact a fascinating, thought-provoking, and essential exercise, one absolutely required of managers deploying AI tools. There are a number of key vulnerabilities of which those who have adopted AI tools in the CX space, or plan to, should be well aware.

Since AI is being deployed heavily in the CX space, a most important consideration must be whether the outcomes, and the decisions based upon those outcomes, are discriminatory. Digital discrimination is not necessarily illegal (although it can be), but it’s a problem that must be anticipated and managed before harm occurs and class action and other lawyers jump in.

Let’s get our terms straight. The IEEE journal Technology and Society defines discrimination as “the unfair or unequal treatment of an individual (or group) based on certain protected characteristics (also known as protected attributes) such as income, education, gender, or ethnicity.” The same journal defines bias as “a deviation from the standard, sometimes necessary to identify the existence of some statistical patterns in the data or language used.” It’s important to note that bias doesn’t always lead to discrimination, but it certainly can, and for this reason alone, it’s imperative to recognize it and mitigate it to the fullest extent possible.

AI bias can be introduced in three ways: during modeling, during training, and through usage. In the case of modeling, bias may be intentionally introduced into the process to produce a desired outcome or to compensate for some other system factor. Training bias arises from the process whereby algorithms learn to make decisions based on past decisions and historical data: if a system has always generated outcomes that contain prejudices, it’s likely that, as a result of its own historical practices, subsequent results will reflect the same biases as the originals. Usage bias occurs when data is used in a situation for which it was not intended. For example, according to the IEEE journal, “an algorithm utilized to predict a particular outcome in a given population can lead to inaccurate results when applied to a different population.” One final kicker: “the potential misinterpretation of an algorithm’s outputs can lead to biased actions through what is called interpretation bias,” also a potential minefield.
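To make the training-bias mechanism concrete, here is a minimal, synthetic sketch in Python. Every variable, number, and outcome below is fabricated for illustration: a model is fit on historical approval decisions that favored one group outright, the protected attribute is deliberately withheld from the model, and the old prejudice nonetheless survives through a correlated proxy feature.

```python
# Synthetic illustration of training bias: a model fit on historically
# prejudiced labels reproduces that prejudice, even without ever seeing
# the protected attribute. All data here is fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)             # protected attribute (withheld from the model)
proxy = group + rng.normal(scale=0.3, size=n)  # e.g., a location code correlated with group
skill = rng.normal(size=n)                     # the legitimate signal

# Historical approvals depended on skill AND on group membership itself,
# so the prejudice is baked into the training labels.
past_approval = (skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, proxy])            # note: `group` is excluded from the features
model = LogisticRegression().fit(X, past_approval)

for g in (0, 1):
    rate = model.predict(X)[group == g].mean()
    print(f"group {g}: predicted approval rate = {rate:.2%}")
```

The two printed rates diverge sharply even though the protected attribute was excluded from the features, which is why “we never use that field” is not, by itself, evidence that a system is unbiased.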

While bias does not always lead to discriminatory outcomes, the statistics, garnered particularly from the world of human resources, are staggering. In its 2025 annual bias report, Allaboutai.com reveals these statistics regarding the root causes of AI bias:

If this doesn’t terrify you, it should. 

Here are a few other vital statistics from the report. Please note that these exclude biases in hiring decisions when AI tools are used.

The bottom line is this: mitigating bias in generated content is essential to creating, and ultimately applying, useful AI-generated data.

AI, Bias and the Law

Doing what’s right is not the same as doing what’s legally permissible or tolerable. With that in mind, some legal considerations should be top of mind, particularly as deployment of AI systems becomes more widespread. A brief summary of the issues associated with bias in AI has already been presented, but it bears repeating that bias does not necessarily mean, or lead to, discrimination; bias, in its positive form, is necessary to classify and identify differences.

However, much of the class action litigation related to AI has been based on perceived unlawful discrimination. Specifically, the legal questions raised require clear answers on:

1) the relevant population affected by the discrimination, and the groups to which the relevant population(s) should be compared;

2) the circumstances that formalized group under-representation, e.g., disparate treatment or disparate impact; and

3) the threshold that constitutes clear evidence of discrimination (one widely used benchmark is sketched below).
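On question 3, the threshold, one widely cited first-pass benchmark in US employment practice is the EEOC’s “four-fifths rule”: a selection rate for any group below 80% of the most-favored group’s rate is treated as initial evidence of adverse impact. The Python sketch below, using purely hypothetical hiring numbers, shows how simple the arithmetic of that first screen is; it is a rule of thumb for flagging cases, not a legal determination.

```python
# Sketch of the EEOC "four-fifths rule" screen for adverse impact.
# The hiring numbers below are hypothetical, for illustration only.

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate as a fraction of the most-favored group's rate.

    `outcomes` maps group name -> (number selected, number of applicants).
    """
    rates = {g: selected / total for g, (selected, total) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

audit = impact_ratios({"group_a": (48, 100), "group_b": (30, 100)})
for g, ratio in audit.items():
    flag = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{g}: impact ratio {ratio:.2f} -> {flag}")
```

Passing this screen does not prove a system is fair, and failing it does not prove discrimination; it simply marks where the harder legal questions above begin.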

According to the previously cited IEEE journal, “whether an algorithm can be considered discriminatory or not depends on the context in which it is being deployed and the task it is intended to perform.” The study continues, “this entails first confirming that the algorithm’s underlying assumptions and its modeling are not biased; second, that its training and test data do not include biases and prejudices; and finally, that it is adequate to make decisions for that specific context and task.” One final important point: “attesting that an algorithmic process is free from biases does not ensure a nondiscriminatory algorithmic output, since discrimination can arise as a consequence of biases in training or in usage.”

Antidiscrimination laws in the US are set out in Title VII of the Civil Rights Act of 1964 and other federal and state statutes, supplemented by court decisions. For instance, Title VII prohibits discrimination in employment on the basis of race, sex, national origin, and religion, and the Equal Pay Act prohibits wage disparity based on sex by employers and unions.

Lastly, and still worthy of consideration, discrimination can result in unfair outcomes. But what’s considered unfair today may be viewed differently as times and circumstances change. Included in those circumstances are changes in technology, which occur much more rapidly than changes in regulation, law, and behavior. Nonetheless, deployers of AI tools need not only to consider these components before signing on the dotted line, but also to remain vigilant against discrimination in an ongoing and systematic way. Ethical and moral business practices and outcomes absolutely require it.

Originally published on August 11, 2025 in No Jitter
