
Interpretable AI

Rory Donovan
January 20, 2023

Overview

Having previously examined the concept and importance of transparency, we now delve deeper into interpretable AI. Interpretability plays a crucial role in the transparency of your ML (AI) systems. So what is interpretability? To answer that, we must start with three definitions that ML managers and developers need to understand and apply.

Interpretability Concepts 

Scrutability

Scrutability is a model's capacity to be understood by humans, and it rests on two concepts: decomposability and simulatability. Decomposability (also termed intelligibility) assesses how well the model can be broken down into its inputs, parameters, and outputs. Simulatability is the degree to which a person can hold the model's structure and function in mind, that is, the level of knowledge about each aspect of the model that its design allows users to acquire. Decomposability paired with simulatability creates scrutability. By this standard, models that require large computations and are riddled with anonymous features are difficult to understand or to call transparent.
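
To make decomposability and simulatability concrete, here is a minimal sketch assuming scikit-learn, with toy numbers and hypothetical feature names. Every part of the linear model can be read directly, and a person can reproduce a prediction by hand:

import numpy as np
from sklearn.linear_model import LinearRegression

# Toy inputs with hypothetical features: [square footage / 100, bedrooms].
X = np.array([[2, 1], [3, 0], [5, 2], [6, 1]])
y = np.array([200, 250, 400, 430])  # output: price in $1000s (made up)

model = LinearRegression().fit(X, y)

# Decomposability: the model breaks down into parts we can read directly.
print("parameters:", model.coef_, "intercept:", model.intercept_)

# Simulatability: a human can redo a prediction with simple arithmetic.
x_new = np.array([4, 1])
by_hand = model.intercept_ + model.coef_ @ x_new
assert np.isclose(by_hand, model.predict([x_new])[0])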

Reproducibility

Reproducibility is judged at the level of the entire model and assesses how easily the model can be adjusted. For an AI system to be considered transparent by this definition, the model must be simple: all the calculations and modifications involved in it can be understood and evaluated by a human within a reasonable amount of time.
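
As an illustration of a model simple enough to meet this bar, the sketch below (assuming scikit-learn) trains a depth-2 decision tree and prints its complete decision logic, which a human can audit in minutes:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# The complete set of rules the model applies; nothing is hidden,
# so a reviewer can evaluate every decision path by hand.
print(export_text(tree, feature_names=list(iris.feature_names)))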

Post Hoc

Post hoc interpretability makes a model useful to end-users even if they do not understand how it works. In this method, end-users of ML models are provided with criteria for extracting useful information from the AI system, but not with a true understanding of the model's inner workings. Approaches to post hoc interpretability include the following (a sketch of approach 3 appears after the list):

  1. Natural language explanations
  2. Visualizations of learned representations and models
  3. Explanations by examples
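
Here is a minimal sketch of approach 3, explanations by examples, assuming scikit-learn: an opaque random forest makes the prediction, and the explanation offered to the end-user is the set of training cases most similar to the query. The nearest-neighbors retrieval stands in for any case-based explainer:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

iris = load_iris()
X, y = iris.data, iris.target

# An opaque model makes the prediction...
black_box = RandomForestClassifier(random_state=0).fit(X, y)
query = X[[60]]
pred = black_box.predict(query)[0]
print("predicted class:", iris.target_names[pred])

# ...and the explanation offered is "the training cases most like this one".
nn = NearestNeighbors(n_neighbors=4).fit(X)
_, idx = nn.kneighbors(query)
for i in idx[0][1:]:  # idx[0][0] is the query itself, so skip it
    print(f"  similar case {i}: features={X[i]}, label={iris.target_names[y[i]]}")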

The Interpretability-Accuracy Tradeoff

An AI system often improves its interpretability at the expense of its accuracy. As a result, the better a system performs as a classifier, the more likely its processes are to fall into the category of a black box, impenetrable to human understanding. However, one can argue that this trade-off does not necessarily exist. Complex algorithms can instead be both interpretable and accurate when highly structured data are modeled using features that have naturally meaningful representations. In such cases, no statistically significant difference is observed between complex deep-learning methods and interpretable methods such as linear and logistic regression, decision trees, and decision lists.
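
As a rough illustration of that claim (not a rigorous benchmark; the dataset and models are illustrative choices), the sketch below compares an interpretable logistic regression against a gradient-boosted ensemble on structured tabular data, assuming scikit-learn:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Structured tabular data with meaningful, named features.
X, y = load_breast_cancer(return_X_y=True)

models = [("logistic regression (interpretable)", LogisticRegression(max_iter=5000)),
          ("gradient boosting (complex)", GradientBoostingClassifier(random_state=0))]

for name, model in models:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")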

Interpretability’s Role in Fairness 

Humans want interpretable AI systems for many reasons: to satisfy curiosity, acquire knowledge, justify answers, mitigate bias, impose security and privacy measures, and gain social appreciation and acceptance. When automated decision processes might adversely affect or disproportionately disadvantage individuals and minority communities, human intervention to correct the disparity requires insight into the processes that drive those decisions.
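
One concrete disparity check that such insight enables is demographic parity: comparing the rate of favorable outcomes across groups. The sketch below uses hypothetical predictions and group labels purely for illustration:

import numpy as np

# Hypothetical model decisions (1 = favorable outcome) and group labels.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

rate_a = preds[group == "a"].mean()
rate_b = preds[group == "b"].mean()
print(f"favorable-outcome rate, group a: {rate_a:.2f}")
print(f"favorable-outcome rate, group b: {rate_b:.2f}")
print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")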

Interested in learning more about how to develop ethical AI? Our firm can help you put best practices in place to better serve your customers. Contact us! Quickly develop ethical AI that is explainable, equitable, and reliable with help from our complete AI IaaS. Sign up for FREE diagnostics.
