OuluREPO – University of Oulu repository

Proxy experience replay: federated distillation for distributed reinforcement learning

Cha, Han; Park, Jihong; Kim, Hyesung; Bennis, Mehdi; Kim, Seong-Lyun (2020-05-15)

 
Open file
nbnfi-fe2020101684236.pdf (515.7 KB)

URL:
https://doi.org/10.1109/MIS.2020.2994942

Publisher: Institute of Electrical and Electronics Engineers
Published: 15.05.2020

H. Cha, J. Park, H. Kim, M. Bennis and S. -L. Kim, "Proxy Experience Replay: Federated Distillation for Distributed Reinforcement Learning," in IEEE Intelligent Systems, vol. 35, no. 4, pp. 94-101, 1 July-Aug. 2020, doi: 10.1109/MIS.2020.2994942

https://rightsstatements.org/vocab/InC/1.0/
© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The persistent address of the publication is
https://urn.fi/URN:NBN:fi-fe2020101684236
Abstract

Traditional distributed deep reinforcement learning (RL) commonly relies on exchanging the experience replay memory (RM) of each agent. Since the RM contains all state observations and action policy history, it may incur huge communication overhead while violating the privacy of each agent. Alternatively, this article presents a communication-efficient and privacy-preserving distributed RL framework, coined federated reinforcement distillation (FRD). In FRD, each agent exchanges its proxy experience RM (ProxRM), in which policies are locally averaged with respect to proxy states clustering actual states. To provide FRD design insights, we present ablation studies on the impact of ProxRM structures, neural network architectures, and communication intervals. Furthermore, we propose an improved version of FRD, coined mixup augmented FRD (MixFRD), in which ProxRM is interpolated using the mixup data augmentation algorithm. Simulations in a Cartpole environment validate the effectiveness of MixFRD in reducing the variance of mission completion time and communication cost, compared to the benchmark schemes, vanilla FRD, federated RL (FRL), and policy distillation.
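
The ProxRM construction described in the abstract can be made concrete with a minimal sketch. Assuming proxy states are given as centroids over the state space and each agent records a per-state action distribution, the code below assigns each actual state to its nearest proxy state and averages the policies that fall into the same cluster; the names build_proxrm and proxy_centroids, and the nearest-centroid clustering rule, are illustrative assumptions rather than the paper's exact procedure.

import numpy as np
from collections import defaultdict

def build_proxrm(states, policies, proxy_centroids):
    # states: (N, d) array of observed states
    # policies: (N, A) array of per-state action distributions
    # proxy_centroids: (K, d) array of proxy states clustering the state space
    sums = defaultdict(lambda: np.zeros(policies.shape[1]))
    counts = defaultdict(int)
    for s, pi in zip(states, policies):
        # Map the actual state to its nearest proxy state.
        k = int(np.argmin(np.linalg.norm(proxy_centroids - s, axis=1)))
        sums[k] += pi
        counts[k] += 1
    # Locally averaged policy per proxy state; only these (proxy state,
    # averaged policy) pairs would be exchanged, not the raw replay memory.
    return {k: sums[k] / counts[k] for k in sums}

Exchanging this compact dictionary rather than the full replay memory is what yields the communication and privacy gains the abstract claims: individual state observations and action histories never leave the agent.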
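
The mixup augmentation in MixFRD can be sketched in the same spirit: the standard mixup recipe draws a coefficient lambda ~ Beta(alpha, alpha) and linearly interpolates random pairs of entries, here applied jointly to proxy states and their averaged policies. The function name and the alpha = 0.2 default are assumptions, not values taken from the paper.

import numpy as np

def mixup_proxrm(proxy_states, avg_policies, alpha=0.2, rng=None):
    # proxy_states: (K, d) array; avg_policies: (K, A) array.
    rng = rng or np.random.default_rng()
    n = len(proxy_states)
    perm = rng.permutation(n)
    # One mixing coefficient per entry, broadcast across features.
    lam = rng.beta(alpha, alpha, size=(n, 1))
    # Interpolate states and policies with the same coefficients,
    # as in the standard mixup recipe.
    mixed_states = lam * proxy_states + (1 - lam) * proxy_states[perm]
    mixed_policies = lam * avg_policies + (1 - lam) * avg_policies[perm]
    return mixed_states, mixed_policies

Interpolating states and policies with the same coefficient keeps each mixed policy a valid probability distribution, since a convex combination of distributions is itself a distribution.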

Collections
  • Open access [37744]