Distributed Learning Methodologies for Massive Machine Type Communication
Da Silva, Matheus Valente; Eldeeb, Eslam; Shehab, Mohammad; Alves, Hirley; Souza, Richard Demo (2024-12-24)
IEEE
M. V. da Silva, E. Eldeeb, M. Shehab, H. Alves and R. D. Souza, "Distributed Learning Methodologies for Massive Machine Type Communication," in IEEE Internet of Things Magazine, vol. 8, no. 1, pp. 102-108, January 2025, doi: 10.1109/IOTM.001.2400093.
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:oulu-202504142586
Abstract
Massive machine-type communication (MTC) addresses new IoT use cases such as connected vehicles, smart agriculture, and factory automation. However, massive MTC applications face major concerns, such as data privacy, latency constraints, and communication overhead. Distributed learning, an emerging technology that enables edge users to train various machine learning (ML) models without sharing raw data, is very promising for overcoming such concerns. However, distributed learning still faces some difficulties related to the amount of training data, accuracy, complexity, and the dynamic wireless environment. Herein, we elucidate different distributed learning methodologies, their limitations, and case studies in the spirit of massive MTC. Specifically, the article showcases the pros of Federated Learning (FL) in the context of dense MTC-based handwritten digits classification. After that, we highlight the differences between FL and Federated Distillation (FD) and proceed to split, transfer, and meta-learning. Then, we discuss multi-agent reinforcement learning (MARL) in a UAV-swarm-empowered smart agriculture scenario. Finally, we present some challenges and future directions for distributed learning research in the context of massive MTC.
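The abstract's central idea — edge users training a shared model without exposing raw data — can be illustrated with a minimal Federated Averaging (FedAvg) sketch. This is not the article's implementation; the synthetic client data, linear model, and hyperparameters below are illustrative assumptions chosen only to show the train-locally, average-globally pattern.

```python
import numpy as np

# Minimal FedAvg sketch: each client fits a linear model on its private
# data and shares only the resulting weights; the server averages them.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # ground truth for the synthetic data

def make_client_data(n):
    """Private dataset held by one edge device (never leaves the client)."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    return X, y

def local_update(w, X, y, lr=0.1, epochs=20):
    """Gradient descent on the client's own data, starting from the global model."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

clients = [make_client_data(50) for _ in range(5)]
w_global = np.zeros(2)
for _ in range(10):  # communication rounds
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)  # server aggregates weights only

print(np.round(w_global, 2))
```

Only the weight vectors cross the wireless link, which is why FL reduces both the privacy exposure and the communication overhead the abstract highlights; in practice the averaging is often weighted by client dataset size.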
Collections
- Avoin saatavuus [38841]