Mix2FLD: Downlink Federated Learning After Uplink Federated Distillation With Two-Way Mixup
Oh, Seungeun; Park, Jihong; Jeong, Eunjeong; Kim, Hyesung; Bennis, Mehdi; Kim, Seong-Lyun (2020-06-19)
S. Oh, J. Park, E. Jeong, H. Kim, M. Bennis and S.-L. Kim, "Mix2FLD: Downlink Federated Learning After Uplink Federated Distillation With Two-Way Mixup," in IEEE Communications Letters, vol. 24, no. 10, pp. 2211-2215, Oct. 2020, doi: 10.1109/LCOMM.2020.3003693.
© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
https://rightsstatements.org/vocab/InC/1.0/
https://urn.fi/URN:NBN:fi-fe2020120399291
Abstract
This letter proposes a novel communication-efficient and privacy-preserving distributed machine learning framework, coined Mix2FLD. To address uplink-downlink capacity asymmetry, local model outputs are uploaded to a server in the uplink as in federated distillation (FD), whereas global model parameters are downloaded in the downlink as in federated learning (FL). This requires a model output-to-parameter conversion at the server, after collecting additional data samples from devices. To preserve privacy while not compromising accuracy, linearly mixed-up local samples are uploaded, and inversely mixed up across different devices at the server. Numerical evaluations show that Mix2FLD achieves up to 16.7% higher test accuracy while reducing convergence time by up to 18.8% under asymmetric uplink-downlink channels, compared to FL.
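The two-way mixup step described above can be illustrated with a short sketch. The following NumPy snippet is a minimal illustration of the idea, not the authors' implementation: each device uploads only a linear mixture of two local samples, and the server, given two such mixtures from different devices that share the same label pair but use different mixing ratios, solves a 2x2 linear system whose solutions carry one-hot label weights while blending raw data across both devices. The function names (`device_mixup`, `server_inverse_mixup`) and the fixed mixing ratios are illustrative assumptions.

```python
import numpy as np

def device_mixup(x1, y1, x2, y2, lam):
    # On-device mixup: only the linear mixture leaves the device,
    # never the raw samples x1, x2.
    x_mix = lam * x1 + (1.0 - lam) * x2
    y_mix = lam * y1 + (1.0 - lam) * y2
    return x_mix, y_mix

def server_inverse_mixup(xm_i, lam_i, xm_j, lam_j):
    # Server-side inverse mixup across two devices. Treat the class
    # "components" u, v as unknowns in the linear system
    #   xm_i = lam_i * u + (1 - lam_i) * v
    #   xm_j = lam_j * u + (1 - lam_j) * v
    # and solve. The solutions have one-hot label weights but blend
    # raw data from BOTH devices, so neither device's raw sample is
    # recoverable by the server.
    det = lam_i - lam_j  # must be nonzero: the ratios have to differ
    u = ((1.0 - lam_j) * xm_i - (1.0 - lam_i) * xm_j) / det
    v = (lam_i * xm_j - lam_j * xm_i) / det
    return u, v

# Toy example: two devices, each mixing one class-0 and one class-1 sample.
rng = np.random.default_rng(0)
a, b = rng.normal(size=4), rng.normal(size=4)    # device i: class 0, class 1
c, d = rng.normal(size=4), rng.normal(size=4)    # device j: class 0, class 1
y0, y1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

xm_i, ym_i = device_mixup(a, y0, b, y1, lam=0.7)  # uploaded by device i
xm_j, ym_j = device_mixup(c, y0, d, y1, lam=0.4)  # uploaded by device j

u, v = server_inverse_mixup(xm_i, 0.7, xm_j, 0.4)
# Applying the same inverse to the mixed labels yields exactly [1, 0]
# for u and [0, 1] for v: label-clean synthetic samples, each a
# cross-device blend of a, b, c, d.
```

Because the inverse step only ever combines already-mixed uploads from two different devices, the server obtains label-clean samples it can use for the output-to-parameter conversion without ever observing an unmixed local sample.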