Comprehensive Analysis Over Centralized and Federated Learning-Based Anomaly Detection in Networks with Explainable AI (XAI)
Rumesh, Yasintha; Senevirathna, Thulitha Theekshana; Porambage, Pawani; Liyanage, Madhusanka; Ylianttila, Mika (2023-10-23)
IEEE
Y. Rumesh, T. T. Senevirathna, P. Porambage, M. Liyanage and M. Ylianttila, "Comprehensive Analysis Over Centralized and Federated Learning-Based Anomaly Detection in Networks with Explainable AI (XAI)," ICC 2023 - IEEE International Conference on Communications, Rome, Italy, 2023, pp. 4853-4859, doi: 10.1109/ICC45041.2023.10278845
https://rightsstatements.org/vocab/InC/1.0/
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:oulu-202401171282
Abstract
Many forms of machine learning (ML) and artificial intelligence (AI) techniques are adopted in communication networks to perform optimization, security management, and decision-making tasks. Instead of conventional black-box models, there is a growing tendency to use explainable ML models that provide transparency and accountability. Moreover, Federated Learning (FL)-type ML models are becoming more popular than typical Centralized Learning (CL) models due to the distributed nature of networks and security and privacy concerns. It is therefore timely to investigate how explainability can be obtained with Explainable AI (XAI) techniques for different ML models. This paper comprehensively analyzes the use of XAI in CL- and FL-based anomaly detection in networks. We use a deep neural network as the black-box model with two datasets, UNSW-NB15 and NSL-KDD, and SHapley Additive exPlanations (SHAP) as the XAI method. We demonstrate that the explanations obtained under FL differ from those under CL depending on the client anomaly percentage.
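The pipeline summarized in the abstract can be illustrated with a minimal, hypothetical sketch. The snippet below is not the authors' code: it uses random placeholder data in place of UNSW-NB15 / NSL-KDD records, an assumed feature count of 42, three clients with assumed anomaly percentages, a single equal-weight FedAvg-style aggregation round, and SHAP's model-agnostic KernelExplainer rather than whichever SHAP explainer the paper employed. It only shows how a DNN anomaly detector trained under FL can be explained with SHAP; running the same explanation on a centrally trained model gives the CL side of the comparison.

import numpy as np
import shap
import tensorflow as tf

rng = np.random.default_rng(0)
NUM_FEATURES = 42  # assumed feature count after preprocessing

def make_client_data(n, anomaly_ratio):
    # Placeholder data; stands in for preprocessed UNSW-NB15 / NSL-KDD partitions.
    X = rng.normal(size=(n, NUM_FEATURES)).astype("float32")
    y = (rng.random(n) < anomaly_ratio).astype("float32")  # 0 = normal, 1 = anomaly
    return X, y

def build_model():
    # Black-box model: a small fully connected deep neural network.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(NUM_FEATURES,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# Clients with different (assumed) anomaly percentages.
clients = [make_client_data(1000, r) for r in (0.1, 0.3, 0.5)]

# One FedAvg-style round: each client trains locally from the same global
# weights, and the server averages the resulting weights with equal weighting.
global_model = build_model()
client_weights = []
for X_c, y_c in clients:
    local = build_model()
    local.set_weights(global_model.get_weights())
    local.fit(X_c, y_c, epochs=3, batch_size=64, verbose=0)
    client_weights.append(local.get_weights())
avg_weights = [np.mean(layer, axis=0) for layer in zip(*client_weights)]
global_model.set_weights(avg_weights)

# Model-agnostic SHAP explanation of the aggregated (FL) model, using a small
# background sample to keep the kernel computation tractable.
X_test, _ = make_client_data(100, 0.3)
background = clients[0][0][:100]
explainer = shap.KernelExplainer(
    lambda x: global_model.predict(x, verbose=0).ravel(), background)
shap_values = explainer.shap_values(X_test[:20], nsamples=100)

# Global feature-importance view of the anomaly decisions.
shap.summary_plot(shap_values, X_test[:20], show=False)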
Collections
- Open access [38865]