Interpretable Ensemble Models: A Feature Contribution Analysis with Explainable AI


Thushara L, Abdul Jabbar P, Pushpalatha K P

Abstract

Ensemble learning offers improved predictive accuracy over its constituent models but often lacks transparency about the influence of individual learners and features. This work addresses that challenge by applying Explainable Artificial Intelligence (XAI) techniques to evaluate feature contributions within ensemble models. An ensemble model built with stacking is used for the feature relevance analysis. The model is designed as a feature ensemble for yoga posture detection, combining three feature sets: keypoints, joint angles, and Hu moments. The analysis is performed with two explainability techniques: SHAP (SHapley Additive exPlanations) and permutation feature importance. The results highlight not only the most important features but also reveal contribution patterns that vary with the XAI technique used. Both methods identify Hu moments as the most influential feature set. This finding could promote the use of Hu moments, an underexplored feature, in further studies of posture recognition.
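To make the described pipeline concrete, the following is a minimal Python sketch of the kind of analysis the abstract outlines: a stacking ensemble trained on concatenated pose feature sets, ranked with permutation feature importance and SHAP. It is not the authors' implementation; the base learners, feature dimensions, and stand-in data are illustrative assumptions, and only standard scikit-learn and shap APIs are used.

    # Illustrative sketch (not the paper's code) of feature-contribution
    # analysis on a stacking ensemble, per the abstract's description.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Stand-in data: 300 pose samples, 5 posture classes. In the real
    # setting these columns would come from a pose estimator (keypoints),
    # joint-angle computation, and the 7 Hu moment invariants of the
    # body silhouette (e.g. via cv2.HuMoments).
    n = 300
    X = np.hstack([
        rng.normal(size=(n, 34)),   # keypoints: 17 (x, y) landmarks
        rng.normal(size=(n, 8)),    # joint angles
        rng.normal(size=(n, 7)),    # Hu moments
    ])
    y = rng.integers(0, 5, size=n)
    feature_names = (
        [f"kp_{i}" for i in range(34)]
        + [f"angle_{i}" for i in range(8)]
        + [f"hu_{i}" for i in range(7)]
    )

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Stacking ensemble: the abstract names stacking but not the exact
    # learners; a random forest and an SVM combined by a logistic-
    # regression meta-learner are assumed here.
    model = StackingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
            ("svm", SVC(probability=True, random_state=0)),
        ],
        final_estimator=LogisticRegression(max_iter=1000),
    )
    model.fit(X_tr, y_tr)

    # Permutation feature importance: mean drop in test accuracy when a
    # single feature column is shuffled, averaged over repeats.
    perm = permutation_importance(model, X_te, y_te,
                                  n_repeats=10, random_state=0)
    for i in perm.importances_mean.argsort()[::-1][:5]:
        print(f"{feature_names[i]}: {perm.importances_mean[i]:.4f}")

    # SHAP: model-agnostic explainer over predicted probabilities; the
    # mean |SHAP value| per feature gives a global contribution ranking.
    explainer = shap.Explainer(model.predict_proba, X_tr[:100])
    shap_values = explainer(X_te[:20])
    mean_abs = np.abs(shap_values.values).mean(axis=(0, 2))
    for i in mean_abs.argsort()[::-1][:5]:
        print(f"{feature_names[i]}: {mean_abs[i]:.4f}")

Comparing the two rankings (permutation importance vs. mean absolute SHAP) is what lets the study observe whether the methods agree, as they do here on the prominence of the Hu moment features.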
