An XAI-Driven Support System to Enhance the Detection and Diagnosis of Liver Tumors for Interventional Radiologists
Abstract
In healthcare, the use of opaque deep learning models often results in limited transparency, potential bias, and inaccuracies, leading to a lack of trust among healthcare providers and patients. To address these challenges, this work integrates Explainable Artificial Intelligence (XAI) methods to improve the transparency and interpretability of AI models, particularly in liver tumor segmentation. Using XAI techniques such as Grad-CAM (Gradient-weighted Class Activation Mapping), the proposed approach provides visual explanations that highlight the regions most influential in the model's predictions. The study combines state-of-the-art deep learning models, achieving a reported accuracy of 99%, to ensure precise and reliable segmentation of liver tumors. Grad-CAM complements this process by generating heatmaps that explain the AI's decision-making, fostering trust and reliability among medical professionals. Beyond segmentation, the framework extends to decision support systems that offer transparent insights into medical decision-making, predictive analytics for forecasting patient outcomes, and natural language processing for analyzing medical data. This approach ultimately equips interventional radiologists with accurate, interpretable, and trustworthy AI solutions, transforming how liver tumors are analyzed and segmented.
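To make the explanation step concrete, below is a minimal Grad-CAM sketch in PyTorch. The `ToyCNN` model, its layer sizes, and the random tensor standing in for a CT slice are illustrative assumptions, not the architecture or data used in this study; the sketch only shows the core Grad-CAM computation (gradients of a class score, pooled into channel weights, applied to the last convolutional activations).

```python
# Minimal Grad-CAM sketch (assumptions: PyTorch; ToyCNN is a placeholder
# classifier, NOT the segmentation model described in the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyCNN(nn.Module):
    """Small stand-in network; layer choices are illustrative only."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(16, num_classes)

    def forward(self, x):
        a = self.features(x)                  # last conv activations
        z = self.pool(a).flatten(1)
        return self.fc(z), a

def grad_cam(model, image, target_class):
    """Return a [0, 1] heatmap of regions driving `target_class`."""
    model.eval()
    logits, activations = model(image)
    activations.retain_grad()                 # keep grads of the conv maps
    logits[0, target_class].backward()        # gradient of the class score
    # Channel weights = global average pooling of the gradients.
    weights = activations.grad.mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations).sum(dim=1))  # weighted sum + ReLU
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[2:],
                        mode="bilinear", align_corners=False)[0, 0]
    cam -= cam.min()                          # normalize for display
    return (cam / cam.max().clamp(min=1e-8)).detach()

if __name__ == "__main__":
    model = ToyCNN()
    scan = torch.randn(1, 1, 64, 64)          # placeholder for a CT slice
    heatmap = grad_cam(model, scan, target_class=1)
    print(heatmap.shape, float(heatmap.max()))
```

In practice, the resulting heatmap would be overlaid on the input scan so a radiologist can verify that the model's prediction is grounded in the tumor region rather than in irrelevant anatomy or imaging artifacts.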