Why AI Explainability is Central to Trust

It’s an open secret among machine learning scientists: explaining how artificial intelligence (“AI”) models get the results they do is often surprisingly difficult. The authors of Prediction Machines: The Simple Economics of Artificial Intelligence put it well:

“Predictions generated by deep learning and many other AI technologies appear to be created by a black box. It isn’t feasible to look at the algorithm or formula underlying the prediction and identify what causes what.” – Prediction Machines: The Simple Economics of Artificial Intelligence, Ajay Agrawal, Joshua Gans, and Avi Goldfarb

To many, AI still seems like science fiction, the province of PhDs and research labs. Why should CEOs and business executives care deeply about AI explainability? Because if customers don’t understand your algorithms, they can’t trust your company. 

Trust is the currency of business. Companies that provide context, ensure transparency, and maintain auditability for their AI systems and algorithms will prosper. These companies will create intelligible AI and, in turn, earn their customers’ trust.

Intelligible AI Framework

Context

As described by Borealis research scientist Matthew Taylor, AI models perform differently in different contexts, and we can currently explain a model’s decisions only in the context for which the model was designed. It is not enough to say autonomous vehicles have driven X miles with Y crashes. In what context did the crashes happen? Were they all at night? In snowy conditions? Narrowing the context in this way is called providing a local explanation of a model: a description of how it behaves under a particular set of conditions. Generalizing model results from local to “global” conditions is often very challenging because of the many different contexts involved.
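To make the idea of a local explanation concrete, here is a minimal sketch in Python, assuming NumPy and scikit-learn are available. The crash-risk classifier and its features (night, snow, speed) are hypothetical, not from the original post: the sketch perturbs a single nighttime, snowy-weather example and fits a small linear surrogate to see which features drive the black-box prediction in that neighborhood.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Hypothetical black-box crash-risk model. Features: night (0/1), snow (0/1), speed (mph).
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(0, 2, 1000),    # night
    rng.integers(0, 2, 1000),    # snow
    rng.uniform(20, 80, 1000),   # speed
])
# Synthetic labels: crashes are more likely at night, in snow, and at high speed.
y = (0.8 * X[:, 0] + 1.0 * X[:, 1] + 0.03 * X[:, 2] + rng.normal(0, 0.3, 1000)) > 2.0

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Local explanation: perturb one instance (night, snow, 55 mph) and fit a simple
# linear surrogate to the black box's predictions in that small neighborhood.
instance = np.array([1.0, 1.0, 55.0])
perturbed = instance + rng.normal(0.0, [0.3, 0.3, 5.0], size=(500, 3))
crash_prob = black_box.predict_proba(perturbed)[:, 1]

surrogate = Ridge(alpha=1.0).fit(perturbed, crash_prob)
for name, weight in zip(["night", "snow", "speed"], surrogate.coef_):
    print(f"{name}: {weight:+.3f}")   # which features matter in this particular context
```

The surrogate’s weights only claim to describe the model near this one driving scenario; they say nothing about how it behaves on a dry highway at noon, which is exactly the local-versus-global distinction above.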

 

Transparency

Taylor argues that another way to ensure intelligibility is to make models inherently interpretable. For example, COMPAS, a black-box algorithm used to assess the risk of recidivism, was accused of racial bias in an influential ProPublica article. In her article “Please Stop Explaining Black Box Models for High-Stakes Decisions,” Cynthia Rudin argues that the rule list found by her CORELS algorithm achieves the same accuracy as COMPAS but is fully understandable: it consists of only three rules and does not take race into account. It doesn’t take a data scientist to understand the inputs to the model below.

Rule list to predict 2-year recidivism rate found by CORELS

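To show just how little there is to explain in a model of this kind, here is a hypothetical three-rule list written as ordinary Python. The thresholds and features are illustrative stand-ins, not the published CORELS rules; the point is that the entire model is readable at a glance and race is simply not an input.

```python
def predict_reoffense(age: int, priors: int) -> bool:
    """Hypothetical three-rule list in the spirit of CORELS (illustrative thresholds)."""
    if 18 <= age <= 20:
        return True           # Rule 1: very young defendants
    if 21 <= age <= 23 and priors >= 2:
        return True           # Rule 2: young defendants with repeat offenses
    if priors > 3:
        return True           # Rule 3: many prior offenses
    return False              # Otherwise: predict no re-arrest within two years

# Anyone can trace a prediction by hand:
print(predict_reoffense(age=19, priors=0))   # True  (Rule 1 fires)
print(predict_reoffense(age=35, priors=1))   # False (no rule fires)
```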

More technical users should also be able to understand the algorithm. Does it converge on a solution (i.e., settle on a stable answer after many iterations)? Are the model inputs reasonable? Can one calculate the outputs from the inputs?
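As a rough sketch of what those checks might look like in practice, the snippet below fits a tiny one-parameter model by gradient descent and tests each question directly. The data, learning rate, and tolerances are all hypothetical.

```python
import numpy as np

# Hypothetical technical checks on a tiny model: fit y ~ w * x with gradient descent.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)               # inputs
y = 3.0 * x + rng.normal(0, 0.5, 200)     # noisy targets (true slope is 3.0)

# 1. Are the model inputs reasonable? (no missing values, plausible range)
assert not np.isnan(x).any() and (x >= 0).all() and (x <= 10).all()

# 2. Does the algorithm converge? Track the loss across iterations.
w, lr, losses = 0.0, 0.001, []
for _ in range(2000):
    grad = np.mean(2 * (w * x - y) * x)   # d(MSE)/dw
    w -= lr * grad
    losses.append(np.mean((w * x - y) ** 2))
assert abs(losses[-1] - losses[-100]) < 1e-6, "loss did not stabilize"

# 3. Can one calculate the outputs from the inputs? The fitted model is a single
#    multiplication, so any prediction can be checked by hand.
print(f"learned w = {w:.3f}; prediction for x = 4 is {w * 4:.2f}")
```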

 

Auditability

Today, investors and the public rely on auditing firms such as Deloitte to hold companies accountable for the integrity of financial reports. I predict that these accounting firms will soon develop a data science auditing practice. Government bodies and industry experts will set standards for responsible AI.

Graph: transparency and trust

Auditing already includes techniques to test the reasonableness of financial records, and similar approaches are emerging to test the reasonableness and intelligibility of AI models. For example, one could directly observe a model’s results under different conditions or states, examine its human-readable decision rules, or use visualization techniques to summarize its results in an easy-to-understand fashion.
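As one sketch of what such an audit step could look like, the snippet below, assuming pandas and scikit-learn and a hypothetical crash dataset, reports a model’s error rate broken out by operating condition rather than as a single overall number.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical audit: report the model's error rate separately per operating condition.
rng = np.random.default_rng(2)
n = 5000
data = pd.DataFrame({
    "night": rng.integers(0, 2, n),
    "snow": rng.integers(0, 2, n),
    "speed": rng.uniform(20, 80, n),
})
# Synthetic ground truth: crashes concentrate in night-plus-snow conditions.
data["crash"] = (rng.random(n) < 0.02 + 0.60 * data["night"] * data["snow"]).astype(int)

features = ["night", "snow", "speed"]
train, test = train_test_split(data, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(train[features], train["crash"])

test = test.assign(error=(model.predict(test[features]) != test["crash"]).astype(int))
print(test.groupby(["night", "snow"])["error"].mean())   # error rate per condition
```

A single headline accuracy number would hide the fact that errors can concentrate in one slice of conditions, which is exactly what an auditor would want surfaced.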

The end goal is to help users understand, and trust, AI algorithms. Fostering trust will encourage thoughtful implementation of AI, ultimately driving growth, enhancing the customer experience, and enabling human progress. And that’s good for business.
