Explainable AI and Machine Learning Models for Transparent and Scalable Intrusion Detection Systems


Muhammad Furqan Khan, Mehedi Hassan, Sharmin Ferdous, Imran Hussain, Lamia Akter, Amit Banwari Gupta

Abstract

The continual emergence of increasingly sophisticated cyber threats has made Intrusion Detection Systems (IDS) an essential part of modern defensive measures. Although traditional machine learning (ML) models have proven effective at detecting anomalies and malicious behavior, their black-box nature undermines trust and limits adoption, particularly in critical settings such as finance, healthcare, and defense, where opaque models are often not permitted. This paper addresses the need for transparent and scalable IDS by applying Explainable Artificial Intelligence (XAI) techniques to state-of-the-art ML models in order to improve both interpretability and operational efficiency.


To this end, we propose a hybrid framework that combines supervised learning models, namely Random Forest, Support Vector Machine, and Gradient Boosting, with the explainability methods SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations). The framework is evaluated for accuracy, interpretability, and scalability on the benchmark datasets NSL-KDD and CICIDS2017. Key metrics, including accuracy, F1-score, and explanation fidelity, are reported and visualized as pie charts, bar graphs, and feature-importance plots.
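To make the pipeline concrete, the sketch below illustrates one branch of the described framework in Python: a Random Forest detector trained on a preprocessed benchmark split, scored with accuracy and F1-score, explained globally with SHAP and locally with LIME. It is a minimal illustration under stated assumptions, not the authors' implementation; the file name nslkdd_preprocessed.csv, the "label" column, and the hyperparameters are placeholders for the example.

```python
# Minimal sketch of one branch of the described pipeline (not the authors' code):
# a Random Forest detector with SHAP (global) and LIME (local) explanations.
# The file name, label column, and hyperparameters below are illustrative assumptions.
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Hypothetical preprocessed NSL-KDD split with numeric features and a 0/1 "label" column.
df = pd.read_csv("nslkdd_preprocessed.csv")
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Supervised detector; SVM or Gradient Boosting would be slotted in the same way.
clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=42)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
print("F1-score:", f1_score(y_test, y_pred))

# Global explanation: SHAP feature importances over a sample of test flows.
explainer = shap.TreeExplainer(clf)
sample = X_test.sample(500, random_state=42)
shap_values = explainer.shap_values(sample)
# For binary classification, keep the attack-class attributions (list or array return).
attack_shap = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
shap.summary_plot(attack_shap, sample, plot_type="bar")

# Local explanation: LIME attribution for a single flow.
lime_explainer = LimeTabularExplainer(
    X_train.values, feature_names=list(X.columns),
    class_names=["benign", "attack"], mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test.iloc[0].values, clf.predict_proba, num_features=10)
print(lime_exp.as_list())
```

In this layout the detector and the explainers are independent components, so the SHAP and LIME steps can be attached to any of the supervised models the abstract lists without changing the training code.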


The analyses show that the ensemble models achieve high detection rates, and that adding XAI substantially improves interpretability without degrading performance. The lightweight and flexible architecture of the proposed system enables large-scale, real-time intrusion detection in most modern network environments.


This paper underscores the importance of explainability in cybersecurity, arguing that trust and transparency are as important as predictive performance. Our results offer a path toward integrating interpretable AI into security infrastructures, facilitating faster threat response, regulatory compliance, and broader trust.
