What is AI Explainability, and Why Do We Need Explainable AI? An Overview
If you are using AI to add new capabilities to your product, you might be wondering about a few things:
- Why are my model results wrong? (or right for that matter)
- Why is the model underperforming or behaving in unexpected ways?
- How can I understand how the model arrived at its output?
- How good is my data? What will happen when my data continues to evolve?
In other words, instead of treating the model as a black box that simply produces results, we try to uncover the internal decision-making process of the model or system. The set of techniques used to make these decisions more transparent is called Explainable AI.
Why make AI Explainable?
There are many benefits for various stakeholders:
- Continuous improvement of models and reduced model drift
- More accurate model results (predictions, classifications, text, images, video)
- Reduced bias and hallucinations
- Greater trust in and adoption of AI
So what do I need to make my AI explainable?
- Explainable data: you need to understand your data thoroughly before you look at how the AI is using it (see the data-profiling sketch after this list)
- Explainable predictions: the AI should be able to give a detailed explanation of why the model made a specific prediction, which features were taken into consideration, what weight was given to each of them, confidence scores, and so on
- Explainable algorithms/models: some simple models (such as linear regression or decision trees) are easy to trace and understand, but as model complexity increases, it becomes hard to follow model behaviour across many layers. The decision-tree sketch after this list shows what that traceability looks like in practice. So how can we make generative AI more explainable? Care to comment?
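To make the "explainable data" point concrete, here is a minimal sketch of the kind of profiling it implies: knowing the size, distributions, gaps, and class balance of your data before any model sees it. The iris dataset is used purely as an illustrative stand-in for your own data.

```python
# A minimal data-profiling sketch (iris is just an illustrative dataset).
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True)
df = iris.frame  # features plus the target column, as a pandas DataFrame

print(df.shape)                      # how much data do we actually have?
print(df.describe())                 # range and spread of every feature
print(df.isna().sum())               # missing values that could skew the model
print(df["target"].value_counts())   # class balance, a common source of bias
```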
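And here is a hedged sketch of what explainable predictions and an interpretable model can look like, using a shallow decision tree from scikit-learn on the same illustrative iris data. The model choice and depth are assumptions for the example, not a recommendation; the point is that the decision rules, feature weights, and confidence scores are all directly inspectable.

```python
# A minimal sketch: an interpretable model that exposes its decision rules,
# feature importances, and a confidence score for one specific prediction.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(iris.data, iris.target)

# The full decision logic printed as human-readable rules.
print(export_text(model, feature_names=list(iris.feature_names)))

# Feature importances answer "which features were considered, with what weight".
for name, weight in zip(iris.feature_names, model.feature_importances_):
    print(f"{name}: {weight:.2f}")

# Class probabilities act as a confidence score for a single prediction.
sample = iris.data[:1]
print("prediction:", iris.target_names[model.predict(sample)[0]])
print("confidence:", model.predict_proba(sample)[0].max())
```

With a deep neural network or a generative model, none of this is available off the shelf, which is exactly why the question above is still open.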
Prediction: I think the current state of validating and testing ML systems is not great. One would think that if you can unit-test an AI system, you should also be able to explain it. So testing and explainability will go hand in hand in the future, and systems built for explainable AI will also provide ML testing and validation.
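As a rough sketch of that idea, the pytest-style tests below check both behaviour (an accuracy threshold) and explanation (that the model's most important feature matches domain knowledge). The threshold and the "petal measurements should dominate" expectation are illustrative assumptions, not a standard.

```python
# A hypothetical sketch of "unit testing" an ML model alongside its explanation.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier


def _train():
    iris = load_iris()
    X_train, X_test, y_train, y_test = train_test_split(
        iris.data, iris.target, test_size=0.3, random_state=0
    )
    model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
    return iris, model, X_test, y_test


def test_accuracy_threshold():
    # Conventional behavioural check: the model must clear a minimum score.
    _, model, X_test, y_test = _train()
    assert model.score(X_test, y_test) > 0.9


def test_explanation_matches_domain_knowledge():
    # Explainability check: the top feature should be a petal measurement,
    # which is what domain knowledge of the iris dataset suggests.
    iris, model, _, _ = _train()
    top_feature = iris.feature_names[model.feature_importances_.argmax()]
    assert "petal" in top_feature
```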