OuluREPO – University of Oulu repository

Communication efficient framework for decentralized machine learning

Elgabli, Anis; Park, Jihong; Bedi, Amrit S.; Bennis, Mehdi; Aggarwal, Vaneet (2020-05-07)

 
Files:
nbnfi-fe2020100277669.pdf (936.7 KB)
nbnfi-fe2020100277669_meta.xml (37.28 KB)
nbnfi-fe2020100277669_solr.xml (27.90 KB)

URL:
https://doi.org/10.1109/CISS48834.2020.1570627384

Publisher: Institute of Electrical and Electronics Engineers
Published: 2020-05-07

A. Elgabli, J. Park, A. S. Bedi, M. Bennis and V. Aggarwal, "Communication Efficient Framework for Decentralized Machine Learning," 2020 54th Annual Conference on Information Sciences and Systems (CISS), Princeton, NJ, USA, 2020, pp. 1-5, doi: 10.1109/CISS48834.2020.1570627384

https://rightsstatements.org/vocab/InC/1.0/
© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi-fe2020100277669
Abstract

In this paper, we propose a fast, privacy-aware, and communication-efficient decentralized framework to solve the distributed machine learning (DML) problem. The proposed algorithm, Group Alternating Direction Method of Multipliers (GADMM), is based on the ADMM algorithm. Its key novelty is that it solves the problem over a decentralized topology in which at most half of the workers compete for the limited communication resources at any given time. Moreover, each worker exchanges its locally trained model only with its two neighboring workers, thereby training a global model with less communication overhead in each exchange. We prove that GADMM converges faster than centralized batch gradient descent for convex loss functions, and numerically show that it converges faster and is more communication-efficient than state-of-the-art communication-efficient algorithms such as the Lazily Aggregated Gradient (LAG) and dual averaging, in linear and logistic regression tasks on synthetic and real datasets.
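
To make the head/tail schedule in the abstract concrete, the following is a minimal Python sketch of GADMM-style updates for decentralized least squares on a chain of workers. It is an illustration under stated assumptions, not the authors' reference implementation: the quadratic local loss, the closed-form local solve, the penalty parameter rho, the even/odd split into head and tail groups, and all variable names are assumptions made for this example.

# Illustrative sketch: GADMM-style head/tail updates on a chain of workers.
# Local loss per worker is least squares, so the primal step has a closed form.
import numpy as np

rng = np.random.default_rng(0)
N, d, m = 6, 5, 20              # workers, model dimension, samples per worker
theta_true = rng.normal(size=d)
A = [rng.normal(size=(m, d)) for _ in range(N)]
b = [A[n] @ theta_true + 0.1 * rng.normal(size=m) for n in range(N)]

rho = 1.0                                    # assumed penalty parameter
theta = [np.zeros(d) for _ in range(N)]      # local models
lam = [np.zeros(d) for _ in range(N - 1)]    # lam[n] couples workers n and n+1

def local_solve(n):
    """Closed-form primal update for worker n's quadratic local loss,
    using only its (at most) two chain neighbors."""
    rhs = A[n].T @ b[n]
    H = A[n].T @ A[n]
    if n > 0:                                # left-neighbor coupling term
        rhs = rhs + lam[n - 1] + rho * theta[n - 1]
        H = H + rho * np.eye(d)
    if n < N - 1:                            # right-neighbor coupling term
        rhs = rhs - lam[n] + rho * theta[n + 1]
        H = H + rho * np.eye(d)
    return np.linalg.solve(H, rhs)

for it in range(100):
    # Head workers (even indices) update in parallel given neighbors' models,
    # so only half of the workers transmit in each half-iteration.
    for n in range(0, N, 2):
        theta[n] = local_solve(n)
    # Tail workers (odd indices) then update using the fresh head models.
    for n in range(1, N, 2):
        theta[n] = local_solve(n)
    # Dual ascent on each chain link enforces consensus between neighbors.
    for n in range(N - 1):
        lam[n] = lam[n] + rho * (theta[n] - theta[n + 1])

print("consensus gap:", max(np.linalg.norm(theta[n] - theta[n + 1]) for n in range(N - 1)))
print("error vs ground truth:", np.linalg.norm(theta[0] - theta_true))

Because each worker only ever reads its two neighbors' models and the dual variables on its own links, no single node sees the full dataset or all models, which is the communication and privacy property the abstract describes.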

Collections
  • Open access [38840]