Ethical AI Terminology

Rory Donovan
January 20, 2023

Overview

As a prelude to future discussions about ethical AI, let's look at three terms you must know to be conversant in the field. These terms, and their meanings, are essential for genuine AI fluency across a broad cross-section of the AI ecosystem: developers, government officials, and corporate leadership, among others. They are AI fairness (closely related to bias), AI transparency/explainability, and AI accountability.

Fairness 

Let's tackle fairness, but first, let's define it. AI fairness, sometimes referred to as AI equitability, is the absence of disadvantage or advantage (bias) to an individual or group in an artificial intelligence system. There are various facets of fairness, and the methods for measuring and guaranteeing each require different software adjustments. Bias and fairness are closely related: when bias exists, fairness is compromised and can only be re-established by mitigating that bias.

So, what's bias? I define it as any situation in which individuals are treated differently by a system or algorithm based on features, often called sensitive or protected attributes, such as skin color or gender, with the system favoring those who hold one value of the attribute and disadvantaging those who hold another. Such discrimination is immoral at the least, and often illegal, yet it is likely already embedded in your AI system.
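To make that definition concrete, here is a minimal sketch (in Python, with invented arrays) of one common way to quantify group bias: comparing positive-outcome rates across a protected attribute, often called demographic parity. The 0.8 threshold in the comment follows the informal four-fifths rule and is an assumption, not a universal standard.

```python
# Minimal sketch: demographic parity ratio across a protected attribute.
# All arrays are invented for illustration.
import numpy as np

def demographic_parity_ratio(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of positive-outcome rates between two groups (coded 0 and 1)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Example: loan approvals (1 = approved) for two groups.
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute
print(f"parity ratio: {demographic_parity_ratio(y_pred, group):.2f}")
# A ratio below ~0.8 is often treated as a warning sign of disparate impact.
```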

COMPAS, a recidivism-prediction algorithm and its widely studied dataset, is a well-known example of discrimination against people of color. Given the same situational background, the algorithm wrongly scored Black defendants as more likely to re-offend than their White counterparts. The algorithm, in this case, inferred race from the data it collected and incorrectly predicted that Black people were more likely to re-offend when that was not the case.
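The disparity reported for COMPAS was largely about error rates: defendants who did not re-offend were misclassified as high risk at very different rates across groups. A hedged sketch of that kind of check, using toy arrays rather than the actual COMPAS data:

```python
# Compare false positive rates (predicted to re-offend but did not) by group.
# Arrays are toy stand-ins, not the real COMPAS data.
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    negatives = y_true == 0                  # people who did not re-offend
    return (y_pred[negatives] == 1).mean()   # share wrongly flagged anyway

y_true = np.array([0, 0, 1, 0, 0, 0, 1, 0])  # 1 = actually re-offended
y_pred = np.array([1, 1, 1, 0, 0, 1, 1, 0])  # 1 = predicted to re-offend
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

for g in (0, 1):
    mask = group == g
    fpr = false_positive_rate(y_true[mask], y_pred[mask])
    print(f"group {g} false positive rate: {fpr:.2f}")
```

A large gap between the two printed rates is exactly the kind of evidence that surfaced in the COMPAS analysis.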

For a fuller picture of how AI systems discriminate against certain people, you may want to check out the film Coded Bias, which examines how unconscious bias has seeped into technology and wrongly discriminates against people of color.

A final note on bias: it is impossible to remove it from the AI system altogether, but steps can be taken to minimize it.
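One widely cited mitigation step is reweighing (Kamiran and Calders), which assigns sample weights so that the protected attribute and the label look statistically independent in the training data. A minimal sketch, assuming every (group, label) combination appears at least once:

```python
# Reweighing sketch: weight each (group, label) cell by expected/observed
# frequency so group and label become independent in the weighted data.
import numpy as np

def reweighing_weights(y: np.ndarray, group: np.ndarray) -> np.ndarray:
    weights = np.empty(len(y))
    for g in np.unique(group):
        for label in np.unique(y):
            cell = (group == g) & (y == label)
            expected = (group == g).mean() * (y == label).mean()
            weights[cell] = expected / cell.mean()  # assumes cell is non-empty
    return weights  # pass as sample_weight to most training APIs

y     = np.array([1, 1, 1, 0, 1, 0, 0, 0])    # favorable label = 1
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])    # protected attribute
print(reweighing_weights(y, group).round(2))
# Under-represented cells get weights above 1; over-represented ones below 1.
```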

Transparency

Second, let's talk about transparency, a term some use in place of explainability. Transparent AI describes artificial intelligence models whose decisions can be understood: their purpose, their results, how to interpret those results, and even what would have to change for an alternative outcome, a technique known as counterfactual analysis.
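To illustrate counterfactual analysis, here is a minimal sketch that nudges a single input feature of a toy scoring model until the decision flips. The model, features, and step size are all illustrative assumptions, not a production method:

```python
# Counterfactual sketch: find the smallest single-feature change that flips
# a toy model's decision. Everything here is invented for illustration.
import numpy as np

def model(x: np.ndarray) -> int:
    """Toy credit model: approve (1) when a weighted score clears 4.0."""
    return int(x @ np.array([0.5, 0.3]) > 4.0)

def counterfactual(x, feature, step=0.1, max_steps=200):
    """Nudge one feature upward until the prediction flips, if it ever does."""
    original = model(x)
    cf = x.astype(float)
    for _ in range(max_steps):
        cf[feature] += step
        if model(cf) != original:
            return cf
    return None  # no flip found within the search range

applicant = np.array([5.0, 3.0])         # e.g., income, credit history
print(model(applicant))                  # 0: denied
print(counterfactual(applicant, 0))      # the income that would flip it
```

The useful output for a user is the difference between the two inputs: "had your income been X instead of Y, the decision would have been approval."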

In some cases, users may not be able to inspect the algorithm a model uses during its training and deployment. Still, they may be able to engineer an alternative algorithm that generates the same predictions as the model in question. If this second model can be understood intuitively, then the original model has been made explainable.
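This second-model idea is often called a global surrogate. Here is a sketch using scikit-learn, in which a shallow decision tree (something a person can actually read) is trained to mimic a random forest that stands in for the black box:

```python
# Global surrogate sketch: fit an interpretable tree to a black box's
# predictions, then check how faithfully it reproduces them.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X, y)
y_bb = black_box.predict(X)              # mimic these, not the true labels

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bb)
fidelity = (surrogate.predict(X) == y_bb).mean()

print(f"fidelity to black box: {fidelity:.2%}")
print(export_text(surrogate))            # human-readable decision rules
```

Fidelity, the share of inputs on which the surrogate agrees with the original, is what licenses using the simple model as an explanation of the complex one.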

It's important to point out that the terms transparent, interpretable, and explainable have overlapping definitions and often mean the same thing, though there are nuanced differences in how the words are used in practice. For instance, front-end developers may say understandable AI, though explainable has quickly become the industry standard.

Let's explore explainability a bit more. Explainable AI is concerned with making AI more transparent: opening the black box to show how neural networks work. Our field is multidisciplinary, combining the philosophy of ethics, mathematics, and computer science, so the challenge becomes explaining AI in a way that all developers, regardless of their training, can understand. If your goal is an excellent UX, you must present results to users in a way that builds fluency and interprets those results; otherwise, it's just numbers on a screen.
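As one illustration of going beyond numbers on a screen, the sketch below turns a linear model's score into per-feature statements a user could read. The feature names and data are invented for the example:

```python
# Turn a linear model's coefficients into readable, signed contributions.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt", "years_employed"]   # invented
X = np.array([[50, 20, 5], [30, 40, 1], [80, 10, 10], [20, 50, 0]])
y = np.array([1, 0, 1, 0])                             # 1 = approved

clf = LogisticRegression().fit(X, y)

applicant = np.array([40, 30, 2])
contributions = clf.coef_[0] * applicant   # signed pull of each feature
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda t: -abs(t[1])):
    verb = "raises" if c > 0 else "lowers"
    print(f"{name} {verb} the approval score by {abs(c):.2f}")
```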

Accountability 

AI developed faster than the government and industry regulations needed to hold developers accountable to those affected by their systems. Today artificial intelligence touches nearly every area of human life, which explains the importance of accountability in AI and our responsibility to ensure it.

Accountability in AI requires the development of regulatory standards for assessing algorithms and measuring their compliance, so as to maximize benefits and minimize costs to society and the environment. Part of that responsibility lies with government regulators, but I would argue that accountability also rests with developers and their employers.

Accountability boils down to who, or what parties, are responsible for researching and building the right AI model for our industry. That means identifying who is responsible for tasks and decision-making along the way, with each step traceable, checked, and verified. Assigned tasks are imperative, and although we strive and aspire for a positive outcome, someone must be answerable if they fail to do their job.
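What that traceability might look like in code is sketched below: a simple audit log that records an owner for every step of a decision pipeline. The field names and schema are illustrative assumptions, not an industry standard:

```python
# Audit-trail sketch: record who owned each step so failures are traceable.
# Field names are illustrative; real systems would sign and persist entries.
import json
from datetime import datetime, timezone

audit_log: list[dict] = []

def record_step(step: str, owner: str, details: dict) -> None:
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "owner": owner,
        "details": details,
    })

record_step("data_validation", "data-team@example.com", {"rows": 10_000})
record_step("model_training", "ml-team@example.com", {"model": "v1.3"})
record_step("deployment_signoff", "risk-officer@example.com", {"approved": True})

print(json.dumps(audit_log, indent=2))
```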

The question becomes how we break the process apart into a system where each person is in charge of one aspect, and how we diagnose an issue when things don't go as planned. Accountability requires everyone to do their part, and it's our responsibility to get it right, ethically and without harming those we serve.

Interested in learning more about how to develop ethical AI? Our firm can help you put best practices in place to better serve your customers. Contact us! Quickly develop ethical AI that is explainable, equitable, and reliable with help from our complete AI IaaS. Sign up for FREE diagnostics.
