PipeSFL: A Fine-Grained Parallelization Framework for Split Federated Learning on Heterogeneous Clients
Gao, Yunqi; Hu, Bing; Mashhadi, Mahdi Boloursaz; Wang, Wei; Bennis, Mehdi (2024-10-31)
IEEE Computer Society Press
Y. Gao, B. Hu, M. B. Mashhadi, W. Wang and M. Bennis, "PipeSFL: A Fine-Grained Parallelization Framework for Split Federated Learning on Heterogeneous Clients," in IEEE Transactions on Mobile Computing, vol. 24, no. 3, pp. 1774-1791, March 2025, doi: 10.1109/TMC.2024.3489642.
https://rightsstatements.org/vocab/InC/1.0/
© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:oulu-202502061478
Abstract
Split Federated Learning (SFL) improves the scalability of Split Learning (SL) by enabling parallel execution of the learning tasks on multiple clients. However, state-of-the-art SFL schemes neglect the effects of heterogeneity in the clients' computation and communication performance, as well as the computation time of the tasks offloaded to the cloud server. In this paper, we propose a fine-grained parallelization framework, called PipeSFL, to accelerate SFL on heterogeneous clients. PipeSFL is based on two novel ideas. First, we design a server-side priority scheduling mechanism to minimize per-iteration time. Second, we propose a hybrid training mode that reduces per-round time by employing asynchronous training within rounds and synchronous training between rounds. We theoretically prove the optimality of the proposed priority scheduling mechanism within one round and analyze the total time per round for PipeSFL, SFL, and SL. We implement PipeSFL in PyTorch. Extensive experiments on seven 64-client clusters with different degrees of heterogeneity demonstrate that, in terms of training speed, PipeSFL achieves up to 1.65x and 1.93x speedups over EPSL and SFL, respectively; in terms of energy consumption, PipeSFL saves up to 30.8% and 43.4% of the energy consumed per training round compared to EPSL and SFL, respectively.
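The two ideas in the abstract can be illustrated with a minimal Python sketch. Everything below is a hypothetical reading of the abstract, not the paper's actual algorithm: the class and function names (PriorityServer, train_round), the client interface, and in particular the slower-client-first priority rule are illustrative assumptions; the paper proves optimality for its own priority rule, which is not specified in the abstract.

```python
import heapq
import threading

class PriorityServer:
    """Sketch of server-side priority scheduling: offloaded tasks are
    executed in priority order rather than first-come-first-served.
    Assumed rule (for illustration only): tasks from slower clients are
    served first, shortening the pipeline's critical path per iteration."""

    def __init__(self):
        self._queue = []   # min-heap of (priority, seq, task)
        self._seq = 0      # tie-breaker so tasks never get compared directly
        self._lock = threading.Lock()

    def submit(self, client_speed, task):
        # Smaller priority value = served earlier; slower client => smaller value.
        with self._lock:
            heapq.heappush(self._queue, (client_speed, self._seq, task))
            self._seq += 1

    def run_next(self):
        with self._lock:
            if not self._queue:
                return None
            _, _, task = heapq.heappop(self._queue)
        # Runs the server-side sub-model computation for the dequeued task.
        return task()

def train_round(server, clients, aggregate):
    """Sketch of the hybrid training mode: asynchronous within a round
    (clients proceed through their iterations without waiting for each
    other), synchronous between rounds (a barrier, then aggregation of
    the client-side models before the next round starts)."""
    threads = [threading.Thread(target=c.run_local_iterations, args=(server,))
               for c in clients]
    for t in threads:
        t.start()
    for t in threads:   # synchronization barrier at the round boundary
        t.join()
    # E.g., FedAvg-style averaging of client-side model copies (assumed).
    aggregate([c.client_model for c in clients])
```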
Collections
- Open access [38841]