Explanation of Black Box Models by E-SHAP for Clear Decision-Making in Healthcare
Abstract
Introduction: SHapley Additive exPlanations (SHAP) is a widely employed method for explaining how sophisticated Artificial Intelligence (AI) models arrive at their predictions. Compared to other explanation and attention mechanisms, SHAP performs better in terms of consistency and its ability to explain both individual predictions and the overall model. However, it is computationally intensive and may consume a lot of computing power, particularly for large or complex models.
Objectives: The main objective of this work is to minimize the computational complexity of model explanations and to reduce their computation time.
Methods: This article applies SHAP to explain how black box models make their decisions, specifically in healthcare for diagnosing heart disease. We introduce a revised version named Enhanced SHAP (E-SHAP), which makes the computation faster while keeping the explanations reliable.
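For context, the sketch below shows the baseline SHAP workflow that E-SHAP is designed to accelerate, using the standard shap library on a synthetic stand-in for a heart-disease table. The feature names, dataset, and model choice are illustrative assumptions, and the enhanced variant itself is not shown because the abstract does not specify its internals.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for a heart-disease table;
# the feature names are illustrative placeholders, not the study's real dataset.
feature_names = ["age", "sex", "chest_pain", "resting_bp",
                 "cholesterol", "max_heart_rate", "st_depression", "exercise_angina"]
X, y = make_classification(n_samples=500, n_features=len(feature_names), random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a black box classifier whose individual predictions we want to explain.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Baseline SHAP: tree-based Shapley value attributions for each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_patients, n_features)

# Per-feature contribution (in log-odds) toward the "disease" class
# for the first test patient.
for name, value in zip(feature_names, shap_values[0]):
    print(f"{name:>16s}: {value:+.3f}")
```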
Results: We evaluated E-SHAP on heart disease and credit scoring datasets and observed that it reduces computation time by 35% without losing significant detail in the explanations.
Conclusion: E-SHAP explanations were more reliable and easier to grasp, according to feedback from financial experts and medical professionals. E-SHAP makes AI more useful for real-world applications by delivering explanations more quickly without compromising accuracy.