Online Optimization for Over-the-Air Federated Learning with Energy Harvesting
An, Qiaochu; Zhou, Yong; Wang, Zhibin; Shan, Hangguan; Shi, Yuanming; Bennis, Mehdi (2023-12-12)
IEEE
Q. An, Y. Zhou, Z. Wang, H. Shan, Y. Shi and M. Bennis, "Online Optimization for Over-the-Air Federated Learning With Energy Harvesting," in IEEE Transactions on Wireless Communications, vol. 23, no. 7, pp. 7291-7306, July 2024, doi: 10.1109/TWC.2023.3339298.
https://rightsstatements.org/vocab/InC/1.0/
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:oulu-202403262439
Abstract
Federated learning (FL) is recognized as a promising privacy-preserving distributed machine learning paradigm, given its potential to enable collaborative model training among distributed devices without sharing their raw data. However, supporting FL over wireless networks confronts the critical challenges of periodically executing power-hungry training tasks on energy-constrained devices and transmitting high-dimensional model updates over spectrum-limited channels. In this paper, we reap the benefits of both energy harvesting (EH) and over-the-air computation (AirComp) to alleviate the battery limitation by harvesting ambient energy to support both the training and transmission of local models, and to achieve low-latency model aggregation by concurrently transmitting local gradients via AirComp. We characterize the convergence of the proposed FL by deriving an upper bound of the expected optimality gap, revealing that the convergence depends on the accumulated errors due to partial device participation and model distortion, both of which further depend on dynamic energy levels. To accelerate the convergence, we formulate a joint AirComp transceiver design and device scheduling problem, which is then tackled by developing an efficient Lyapunov-based online optimization algorithm. Simulations demonstrate that, by appropriately scheduling devices and allocating energy across multiple communication rounds, our proposed algorithm achieves a much better learning performance than benchmarks.
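The abstract describes a Lyapunov-based online scheduler that balances learning utility against the energy dynamics of harvesting devices. The snippet below is a minimal, illustrative sketch of a generic drift-plus-penalty style device-scheduling loop under these ideas; all parameter names, constants, the channel/energy models, and the scoring rule are assumptions for illustration only and do not reproduce the paper's actual algorithm or convergence analysis.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): drift-plus-penalty style
# scheduling of energy-harvesting FL devices across communication rounds.
rng = np.random.default_rng(0)

K = 10          # number of devices (assumed)
T = 50          # communication rounds (assumed)
B_max = 5.0     # battery capacity in energy units (assumed)
e_cost = 1.0    # energy per round for local training + transmission (assumed)
V = 2.0         # Lyapunov weight trading off utility vs. energy stability (assumed)

battery = np.full(K, B_max / 2)  # initial energy levels

for t in range(T):
    # Assumed i.i.d. channel gains and energy arrivals each round.
    h = rng.rayleigh(scale=1.0, size=K)           # channel magnitudes
    harvest = rng.exponential(scale=0.8, size=K)  # harvested energy

    # Drift-plus-penalty style score: V weights the per-round utility
    # (here, a better channel), while the battery term discourages
    # scheduling devices whose energy queue is running low.
    score = V * h + (battery - B_max / 2)

    # Schedule a device only if it can afford the round and its score is positive.
    scheduled = (battery >= e_cost) & (score > 0)

    # Battery dynamics: spend if scheduled, then harvest, clipped at capacity.
    battery = np.minimum(battery - e_cost * scheduled + harvest, B_max)

    print(f"round {t:2d}: scheduled {int(scheduled.sum()):2d}/{K} devices")
```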
Collections
- Open access [34357]