
Enhancing Intrusion Detection with Explainable AI: A Transparent Approach to Network Security

Open Access
Mar 2024

Abstract

An Intrusion Detection System (IDS) is essential for identifying cyber-attacks and implementing appropriate countermeasures for each risk. The efficiency of Machine Learning (ML) techniques is compromised in the presence of irrelevant features and class imbalance. In this research, an efficient data pre-processing strategy is proposed to enhance the model’s generalizability. Class imbalance is addressed using k-Means SMOTE. We then propose a hybrid feature selection method that combines filter and wrapper approaches. Further, a hyperparameter-tuned Light Gradient Boosting Machine (LGBM) is evaluated across the resulting optimal feature subsets. Experiments on the UNSW-NB15 and CICIDS-2017 datasets yielded accuracies of 90.71% and 99.98%, respectively. As the transparency and generalizability of the model depend significantly on understanding each component of the prediction, we employ the eXplainable Artificial Intelligence (XAI) method SHapley Additive exPlanations (SHAP) to improve the comprehension of the forecasted results.
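The core idea behind k-Means SMOTE, as named in the abstract, is to cluster the minority class and then synthesize new samples by interpolating between neighbouring points within each cluster. The sketch below is an illustrative approximation only, not the authors' implementation; in practice a library such as imbalanced-learn's `KMeansSMOTE` would typically be used. All function and variable names here are hypothetical.

```python
# Illustrative sketch of the k-Means SMOTE idea: cluster the minority
# class, then generate synthetic points by interpolating between two
# members of the same cluster. NOT the paper's implementation.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_smote_sketch(X_min, n_new, n_clusters=3, seed=0):
    """Return up to n_new synthetic minority samples (hypothetical helper)."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(X_min)
    synthetic = []
    for _ in range(n_new):
        c = rng.integers(n_clusters)           # pick a cluster at random
        members = X_min[labels == c]
        if len(members) < 2:                   # need two points to interpolate
            continue
        i, j = rng.choice(len(members), size=2, replace=False)
        t = rng.random()
        # synthetic point lies on the segment between the two members
        synthetic.append(members[i] + t * (members[j] - members[i]))
    return np.array(synthetic)

# toy minority class: 20 points in 2-D, drawn around two centres
X_min = np.vstack([np.random.RandomState(1).normal(loc=m, size=(10, 2))
                   for m in (0.0, 5.0)])
X_new = kmeans_smote_sketch(X_min, n_new=30)
print(X_new.shape)
```

Constraining interpolation to within-cluster pairs is what distinguishes k-Means SMOTE from plain SMOTE: it avoids generating synthetic points in empty regions between distant minority groups, which matters for sparse attack classes in datasets like UNSW-NB15.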

DOI: https://doi.org/10.2478/cait-2024-0006 | Journal eISSN: 1314-4081 | Journal ISSN: 1311-9702
Language: English
Page range: 98 - 117
Submitted on: Sep 27, 2023
Accepted on: Dec 14, 2023
Published on: Mar 23, 2024
Published by: Bulgarian Academy of Sciences, Institute of Information and Communication Technologies
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2024 Seshu Bhavani Mallampati, Hari Seetha, published by Bulgarian Academy of Sciences, Institute of Information and Communication Technologies
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.