In our recent blog series, we have highlighted many problems with commonly used XAI algorithms. These problems arise because those algorithms were developed by machine learning academics for machine learning academics: they were not built with a business context in mind, and they are not fit for that purpose.
As we have seen, these XAI algorithms produce explanations which aren’t really explanations: they are feature importances rather than sets of facts. They are neither human-readable nor actionable, and they have a number of technical issues which compromise their ability to function as part of a business-ready AI solution.
At Elula, our proprietary XAI technology solves these problems. Drawing inspiration from LIME, SHAP, and other publicly available algorithms, we have developed several proprietary XAI algorithms which do not fall prey to these issues. We deploy them in our customer retention product, Sticky, which predicts home loan churn. Specifically, Sticky delivers a ranked list of home loan customers, from the most likely to churn to the least likely. The XAI module then unpacks the black box of machine learning and explains why each highly ranked customer received that prediction. Finally, it converts those explanations into everyday business language, ready for the front line, so the end user can understand what the AI is “thinking”. That context facilitates a personalised customer conversation, and it is what sets Elula’s XAI apart from common XAI algorithms in its business application.
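The pipeline described above can be sketched in a few lines. This is a minimal illustration of the shape of the workflow, not Sticky’s actual implementation; all function names, fields and thresholds here are hypothetical.

```python
# Illustrative only: a toy version of "rank customers by churn risk,
# then explain the highly ranked ones". Not Elula's implementation.

def rank_customers(customers, churn_model):
    """Return (score, customer) pairs, most likely to churn first."""
    scored = [(churn_model(c), c) for c in customers]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored

def explain_top(scored, explainer, top_n=100):
    """Only the highly ranked customers are passed to the XAI module."""
    return {c["id"]: explainer(c) for _, c in scored[:top_n]}
```

The design point is that explanation runs only on the customers who will actually be contacted, which keeps the (computationally expensive) XAI step tractable.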
Unlike the algorithms we have discussed, the explanations from Elula’s XAI are not feature importances but a minimal set of facts which explain why the models predict this customer will churn. Uniquely, they are counterfactual explanations: had those facts been different, the prediction would have been different. This is the useful, intuitive explanation required in business contexts. The users of AI decisions need to know why a decision was made and which facts about the case drove it. Individual feature names are useless to most consumers of AI decisions, and the scores LIME and SHAP assign to those features are equally useless.
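To make the idea of a “minimal set of facts” concrete, here is a generic sketch of a counterfactual search: find the smallest set of features which, if changed to a baseline value, flips the model’s prediction. This is a textbook-style illustration under invented feature names, not Elula’s proprietary algorithm.

```python
from itertools import combinations

def minimal_counterfactual(customer, baseline, predict):
    """Return the smallest set of features whose change flips the prediction.

    Generic brute-force sketch: try subsets in increasing size, so the
    first flip found is minimal. Real algorithms must be far more
    efficient and constrain changes to realistic values.
    """
    original = predict(customer)
    features = list(customer)
    for size in range(1, len(features) + 1):
        for subset in combinations(features, size):
            altered = dict(customer, **{f: baseline[f] for f in subset})
            if predict(altered) != original:
                return subset  # these facts explain the prediction
    return None
```

Returning facts rather than scores is what makes the explanation directly usable in a conversation: “this customer is flagged because their rate rose” rather than “feature 7 has weight 0.31”.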
Our aptly named “Englishing Module” works with Elula’s XAI algorithms to produce explanations which are human-readable, easy to understand, and actionable. Critically, they help move predictions from a data science exercise into pointed conversations delivering tangible ROI. These are all key requirements of AI explanations in business, and these Englished explanations are used in Sticky to enable customer service agents to have directed, meaningful, timely conversations with home loan customers.

Because Elula’s XAI algorithms were designed with scalability, robustness and usability as core requirements, they do not fall prey to the technical limitations presented in our last blog post. By recognising and resolving those problems, we have produced XAI algorithms which cannot be fooled in the same ways, do not generate unrealistic counterfactuals, and do not depend on complex, arbitrary parameters. They are also fully reproducible, giving the same answer to the same question every time. XAI is computationally intensive and requires well-optimised, well-engineered cloud infrastructure – you can read more about how we do this in Amazon Web Services’ recent Case Study on Elula here.
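In spirit, an Englishing step maps model facts to plain-language phrases a front-line agent could actually say. The toy version below shows the idea; the phrase table and facts are invented for illustration, and Elula’s Englishing Module is considerably richer than a lookup table.

```python
# Hypothetical phrase table: fact -> plain-English fragment.
PHRASES = {
    "rate_rise": "their interest rate recently increased",
    "fixed_term_ending": "their fixed term is about to expire",
    "web_activity": "they have been researching refinancing online",
}

def english(facts):
    """Join the explaining facts into one readable sentence."""
    phrases = [PHRASES[f] for f in facts if f in PHRASES]
    if not phrases:
        return "No explanation available."
    body = ", and ".join(phrases)
    return f"This customer may be at risk because {body}."
```

The output is something an agent can read aloud, which is the whole point: the explanation has to survive the journey from model internals to a customer conversation.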
By using our XAI algorithms, we provide context for the decisions made by AI so that the end user can take effective action. Our accurate, understandable explanations build trust in the predictions, allowing the end user to integrate our business-ready AI solution into their workflow and have informed conversations with at-risk customers.