Machine Learning Times
Explainable Artificial Intelligence (Part 2) – Model Interpretation Strategies


Originally published in KDnuggets, December 2018

For today’s leading deep learning methods and technology, attend the conference and training workshops at Deep Learning World, June 16-19, 2019 in Las Vegas.

Introduction

This article is a continuation of my series on Explainable Artificial Intelligence (XAI). If you haven’t read the first article, I definitely recommend taking a quick glance at ‘Part I — The Importance of Human Interpretable Machine Learning’, which covers the what and why of human-interpretable machine learning, the need for and importance of model interpretation, and its scope and criteria. In this article, we will pick up where we left off, expand further on the criteria for machine learning model interpretation methods, and explore interpretation techniques based on scope. The aim of this article is to give you a good understanding of existing, traditional model interpretation methods, along with their limitations and challenges. We will also cover the classic model accuracy vs. model interpretability trade-off and, finally, look at the major strategies for model interpretation.

Briefly, we will cover the following aspects in this article:

  • Traditional Techniques for Model Interpretation
  • Challenges and Limitations of Traditional Techniques
  • The Accuracy vs. Interpretability trade-off
  • Model Interpretation Techniques

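As a quick preview of the accuracy vs. interpretability trade-off listed above, here is a minimal sketch comparing a simple, directly inspectable model with a more complex ensemble. This assumes scikit-learn; the dataset and model choices are illustrative, not the ones used in the article itself, and exact scores will vary.

```python
# Illustrative sketch (assumes scikit-learn): an interpretable linear model
# vs. a more complex ensemble on the same classification task.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Interpretable: each learned coefficient directly relates a feature
# to the predicted outcome.
simple = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# More flexible, but its decisions emerge from hundreds of trees,
# so no single coefficient explains a prediction.
complex_model = RandomForestClassifier(
    n_estimators=200, random_state=42
).fit(X_train, y_train)

print("Logistic regression accuracy:", simple.score(X_test, y_test))
print("Random forest accuracy:      ", complex_model.score(X_test, y_test))
print("First few coefficients (inspectable):", simple.coef_[0][:5])
```

Often the complex model scores slightly higher, while only the simple model offers per-feature coefficients you can read off directly; that tension is the trade-off in a nutshell.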
This should get us set and ready for the detailed hands-on guide to model interpretation coming in Part 3, so stay tuned!

Traditional Techniques for Model Interpretation

At its heart, model interpretation is about understanding a model’s decision-making policies. That understanding enables fairness, accountability, and transparency, which in turn give humans enough confidence to use these models for real-world problems that have a substantial impact on business and society. Techniques for understanding and interpreting models have therefore existed for a long time, and they can be grouped under two major categories.
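As a minimal sketch of one long-standing interpretation technique (an illustrative choice, not necessarily one of the categories the author has in mind), a shallow decision tree can be rendered as human-readable if-then rules. This assumes scikit-learn; the dataset is a stand-in.

```python
# Illustrative sketch (assumes scikit-learn): train a shallow decision tree
# and print its learned decision rules for a human to audit.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

# Keeping the tree shallow keeps the rule set small enough to read.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the fitted tree as nested if-then rules.
print(export_text(tree, feature_names=feature_names))
```

Reading the printed rules shows exactly which feature thresholds drive each prediction, which is the kind of transparency deeper or ensembled models give up.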

Let’s take a closer look at these techniques.

To continue reading this article on KDnuggets, click here.

About the Author

Data Scientist, Published Author, Mentor & Trainer. If you have an interesting proposal, research idea, or opportunity, feel free to message me or email me at dipanzan [dot] sarkar [at] gmail [dot] com.

