SecFLH: Defending Federated Learning-Based IoT Health Prediction Systems Against Poisoning Attacks
Liyanage, Sanoj; Weerawardhane, Venuranga; Kheminda, Jalitha; Siriwardhana, Yushan; Weerasinghe, Thilina; Liyanage, Madhusanka
IEEE
S. Liyanage, V. Weerawardhane, J. Kheminda, Y. Siriwardhana, T. Weerasinghe and M. Liyanage, "SecFLH: Defending Federated Learning-Based IoT Health Prediction Systems Against Poisoning Attacks," 2025 Joint European Conference on Networks and Communications & 6G Summit (EuCNC/6G Summit), Poznan, Poland, 2025, pp. 751-756, doi: 10.1109/EuCNC/6GSummit63408.2025.11037014
https://rightsstatements.org/vocab/InC/1.0/
© 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:oulu-202506305022
Abstract
Poisoning attacks in Federated Learning (FL) steer the global model toward a malicious objective. While existing defenses against poisoning attacks are effective, their performance degrades substantially in the presence of non-IID (not independent and identically distributed) data. This paper introduces SecFLH, a novel defense mechanism for FL systems designed to counter targeted model poisoning attacks, particularly in the non-IID data environments often encountered in healthcare IoT applications. Unlike traditional aggregation defenses, SecFLH employs a multi-step approach, incorporating cosine distance analysis, HDBSCAN clustering, centroid selection, and adaptive clipping to effectively isolate and exclude malicious client updates. Experimental results on benchmark datasets, including MNIST, CIFAR-10, and real-world healthcare data, validate SecFLH's robustness in maintaining model accuracy even with a high percentage of malicious clients. The proposed algorithm demonstrates resilience across varying non-IID scenarios, highlighting its practical potential for secure FL applications in dynamic, distributed environments.
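The multi-step pipeline the abstract describes (pairwise cosine distances between client updates, clustering, selection of the assumed-benign majority cluster, adaptive clipping, aggregation) can be sketched roughly as follows. This is an illustrative NumPy sketch, not the paper's implementation: a crude neighbour-count grouping stands in for HDBSCAN, and every name and parameter here (`secflh_style_filter`, `dist_threshold`, `clip_factor`) is hypothetical.

```python
import numpy as np

def secflh_style_filter(updates, dist_threshold=0.5, clip_factor=1.0):
    """Hypothetical sketch of a SecFLH-style robust aggregation step.

    updates: (n_clients, dim) array of flattened client model updates.
    Steps mirror the abstract's description:
      1. pairwise cosine distances between client updates,
      2./3. group updates and keep the densest neighbourhood
         (a simple stand-in for HDBSCAN clustering + centroid selection),
      4. adaptively clip surviving updates to the median norm,
      5. average the clipped updates.
    Returns the aggregated update and the indices of kept clients.
    """
    # 1. pairwise cosine distance matrix from unit-normalised updates
    norms = np.linalg.norm(updates, axis=1, keepdims=True)
    unit = updates / np.clip(norms, 1e-12, None)
    dist = 1.0 - unit @ unit.T

    # 2./3. crude grouping: the client with the most neighbours within
    # the threshold anchors the benign cluster; keep its neighbourhood.
    neighbour_counts = (dist < dist_threshold).sum(axis=1)
    core = int(np.argmax(neighbour_counts))
    keep = np.where(dist[core] < dist_threshold)[0]

    # 4. adaptive clipping: scale each kept update down to at most
    # clip_factor times the median norm of the kept updates.
    kept = updates[keep]
    kept_norms = np.linalg.norm(kept, axis=1)
    bound = clip_factor * np.median(kept_norms)
    scale = np.minimum(1.0, bound / np.clip(kept_norms, 1e-12, None))
    clipped = kept * scale[:, None]

    # 5. aggregate the surviving, clipped updates
    return clipped.mean(axis=0), keep
```

With eight benign clients pushing in one direction and two sign-flipped (poisoned) clients, the malicious updates sit far away in cosine distance, fall outside the dense neighbourhood, and are excluded before averaging.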
Collections
- Open access [38841]