What is Explainable AI?

Explainable AI – A Key to the Practical Success of AI and ML

Artificial intelligence (AI) and machine learning (ML) often generate answers or predictions with no explanation or reasoning. In many cases, even the data scientists who know the inner workings of the AI/ML platform cannot explain the reasoning behind the platform's answers and predictions. One of the best-known examples of this black-box lack of explainability is ChatGPT, which gives impressive answers that are usually correct but cannot clearly explain how those answers were generated.

In contrast, explainable AI provides relatively simple explanations or reasons behind answers or predictions and is a key component of Transparent AI.

Why is Explainable AI important?

The biggest reason explainable AI is important is trust.

AI can only be fully leveraged when it is trusted. Subject matter experts will discount the value of AI if it provides answers and predictions that contradict their expertise, whether the expert is a loan officer or a surgeon. Citizens will demand that the government limit or outlaw AI applications they don't trust; we have already seen limits and even outright bans on AI-based facial recognition in certain states and cities.

Trust is fundamental to accepting the recommendations of any expert, human or AI. This need for trust is the driving force behind various laws and regulations that grant individuals a “right to explanation”. For example:

  • The U.S. banking and financial services industry is required by law to give applicants who are denied credit the specific reasons for the denial (Equal Credit Opportunity Act, Title 12, Chapter X, Part 1002, §1002.9).
  • U.S. insurance companies are required to explain their rate and coverage decisions to their customers.
  • The European Union's General Data Protection Regulation (GDPR) requires that meaningful information about the logic behind algorithm-based decisions be made available to the individuals affected. This is particularly important because GDPR can apply to U.S. businesses that operate in the EU or that process the personal data of individuals in the EU.

Confidence scores

One of the issues with explainable AI is determining what constitutes an adequate explanation.

According to Jim Guszcza, Chief Data Scientist at Deloitte, humans comprehend explanations best as rules and thresholds. It follows that each reason output by the AI/ML platform should include a confidence score that rates how believable the reason is, so that a human can decide whether to act on it (high confidence score) or not (low confidence score).

Confidence scores, which are derived from the training data, are typically expressed as percentages reflecting how strongly we should rely on an AI recommendation. For example, a confidence score over 70% reflects high confidence in the prediction or recommendation, while a 30% confidence score suggests that the recommendation may be correct but the level of certainty is low.
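To make the rules-and-thresholds idea concrete, here is a minimal Python sketch of how reason-level confidence scores might be triaged using the 70% and 30% cutoffs mentioned above. The triage function and the example reasons are hypothetical illustrations, not the output of any particular platform.

```python
def triage(confidence: float) -> str:
    """Map a confidence score (0-1) to a recommended human action."""
    if confidence >= 0.70:
        return "act"       # high confidence: safe to act on this reason
    elif confidence <= 0.30:
        return "discard"   # low confidence: treat the reason as noise
    return "review"        # mid-range: route to a human for review

# Hypothetical reasons emitted by an AI/ML platform, each phrased as a
# rule-and-threshold explanation with its own confidence score.
reasons = [
    {"reason": "debt-to-income ratio 0.45 exceeds threshold 0.40", "confidence": 0.86},
    {"reason": "two late payments in the last 12 months", "confidence": 0.28},
]

for r in reasons:
    print(f'{r["reason"]} -> {triage(r["confidence"])} ({r["confidence"]:.0%})')
```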

In summary, confidence scores quantify explainability. Without a confidence score, AI explanations are academic – not actionable.

Counterfactuals – How to change a “no” answer to a “yes”

One of the problems with explainable AI is that people don't always want a reason for an AI recommendation. They want the answer to be changed!

For example, if your company applies for a loan that you believe it can repay, you're not going to be happy if the AI platform tells the loan officer to deny the loan, even if it gives reasons. You may be more willing to accept the answer, however, if the system can also offer guidelines for what changes you could make in your business to get the loan approved. This information on how to change a recommendation is called a counterfactual.

Explainable AI provides the reasons for a recommendation or prediction; counterfactual inferencing takes it a step further by identifying the most efficient way to change the recommendation, including which features to change and by how much. As with the original explanations, the counterfactuals should be delivered in natural language, for example via an LLM.
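To illustrate the mechanics, here is a minimal Python sketch of a brute-force counterfactual search for a denied loan. The stand-in scoring model, the feature names (credit_score, debt_to_income, years_in_business), the candidate step sizes, and the approval cutoff are all hypothetical assumptions chosen for illustration; production systems apply dedicated counterfactual methods to the real model and, as noted above, hand the result to an LLM for phrasing.

```python
# Minimal sketch of counterfactual search, not any vendor's actual method.
# All numbers below are illustrative assumptions.
from itertools import product

def approve(applicant: dict) -> bool:
    """Stand-in model: approve when a simple weighted score clears a cutoff."""
    score = (0.5 * applicant["credit_score"] / 850
             - 0.4 * applicant["debt_to_income"]
             + 0.1 * applicant["years_in_business"] / 10)
    return score >= 0.25

def find_counterfactual(applicant, steps, max_changes=2):
    """Try small combinations of feature changes; return the lowest-effort
    combination that flips the decision to an approval."""
    best = None
    for combo in product(*steps.values()):
        candidate, changes = dict(applicant), {}
        for feat, delta in zip(steps, combo):
            if delta:
                candidate[feat] += delta
                changes[feat] = delta
        if 0 < len(changes) <= max_changes and approve(candidate):
            # Effort: each change weighted by how large it is relative to
            # the biggest step we allowed for that feature.
            effort = sum(abs(d) / max(abs(s) for s in steps[f] if s)
                         for f, d in changes.items())
            if best is None or effort < best[1]:
                best = (changes, effort)
    return best[0] if best else None

applicant = {"credit_score": 640, "debt_to_income": 0.52, "years_in_business": 3}
steps = {  # candidate adjustments per feature; 0 means "leave unchanged"
    "credit_score": (0, 20, 40),
    "debt_to_income": (0, -0.05, -0.10),
    "years_in_business": (0,),
}
print("Denied today:", not approve(applicant))
print("To flip the decision, change:", find_counterfactual(applicant, steps))
# -> {'credit_score': 20, 'debt_to_income': -0.1}; an LLM would render this
#    as, e.g., "Raise your credit score by 20 points and cut your
#    debt-to-income ratio by 10 percentage points."
```

The effort metric is a design choice: normalizing each change by the largest step allowed for that feature keeps differently scaled features comparable when picking the cheapest path to a "yes."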

Celsior Explainable AI / ML Solution

Our Explainable AI / ML solution is a leading example of a platform that provides explainable AI, confidence scores, and counterfactuals. These benefits are available whether you use it as a stand-alone platform or alongside your existing AI/ML systems.

If you are interested, we would be happy to provide more information about our solution or to give you a demo of it.

To explore the full set of strategic AI recommendations for financial institutions — including explainability, governance, and adoption frameworks — download our white paper, Banking on AI.

Download the white paper
