An Explainable Deep Learning Model Combining Integrated Gradients, GradientSHAP and Occlusion for Breast Cancer Detection
Abstract
Breast cancer, a prominent cause of female mortality, underscores the critical need for timely intervention. Researchers and practitioners continue to develop models for early breast cancer detection, yet interpretability remains underexplored. This research is motivated by the desire to visually interpret breast cancer diagnoses for pathologists and even for lay readers. The study uses the BreakHis dataset from Kaggle and trains a ResNet50 deep learning model to classify breast tumors as either malignant or benign. The model achieved a notable 96.84% test accuracy, surpassing results obtained by other researchers using similar explainable deep learning methodologies.
Furthermore, this research goes beyond mere classification by applying eXplainable Artificial Intelligence (XAI) techniques—Integrated Gradients (IG), GradientShap (GS), and Occlusion—to the CNN model to interpret breast cancer diagnoses from histopathological images. These techniques provide insight into why a specific histopathological image is categorized as benign or malignant. Compared with a plain CNN classifier, the XAI-augmented approach stands out for the visual evidence it provides alongside each prediction. Unlike earlier studies, this research not only classifies histopathological images but also offers transparent reasons for its outcomes, enhancing the understanding of breast cancer diagnoses.
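To make the first of these attribution methods concrete: Integrated Gradients attributes a prediction to each input feature by integrating the model's gradient along a straight path from a baseline to the input. The sketch below is purely illustrative and is not the paper's code — it applies the IG formula to a hypothetical toy function with an analytic gradient (the function `f`, its gradient, and all variable names are assumptions for demonstration), rather than to a ResNet50 on histopathological images.

```python
import numpy as np

def f(x):
    # Hypothetical toy "model": a simple differentiable scalar function.
    return x[0] ** 2 + 3.0 * x[1]

def grad_f(x):
    # Analytic gradient of f with respect to its inputs.
    return np.array([2.0 * x[0], 3.0])

def integrated_gradients(x, baseline, grad, steps=200):
    # Riemann-sum (midpoint rule) approximation of the IG path integral:
    # IG_i = (x_i - x'_i) * (1/m) * sum_k grad_i(x' + alpha_k * (x - x'))
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

x = np.array([2.0, 1.0])
baseline = np.zeros(2)        # all-zero baseline, as is common for images
attr = integrated_gradients(x, baseline, grad_f)
# Completeness axiom: the attributions sum to f(x) - f(baseline).
```

For a real CNN such as the paper's ResNet50, the same formula is typically applied per pixel, with the gradients supplied by automatic differentiation; the resulting per-pixel attributions are then rendered as the heatmaps pathologists inspect.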