Cooperative edge caching via federated deep reinforcement learning in fog-RANs
Zhang, Min; Jiang, Yanxiang; Zheng, Fu-Chun; Bennis, Mehdi; You, Xiaohu (2021-07-09)
M. Zhang, Y. Jiang, F. -C. Zheng, M. Bennis and X. You, "Cooperative Edge Caching via Federated Deep Reinforcement Learning in Fog-RANs," 2021 IEEE International Conference on Communications Workshops (ICC Workshops), 2021, pp. 1-6, doi: 10.1109/ICCWorkshops50388.2021.9473609
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
https://rightsstatements.org/vocab/InC/1.0/
https://urn.fi/URN:NBN:fi-fe2021102151869
Abstract
In this paper, the cooperative edge caching problem is investigated in fog radio access networks (F-RANs). Given the non-deterministic polynomial-time hard (NP-hard) nature of this problem, a federated deep reinforcement learning (FDRL) framework is put forth to learn the content caching strategy. Then, to overcome the curse of dimensionality in reinforcement learning and improve the overall caching performance, we propose a dueling deep Q-network based cooperative edge caching method that finds the optimal caching policy in a distributed manner. Furthermore, horizontal federated learning (HFL) is applied to address the over-consumption of resources during distributed training and data transmission. Simulation results show that, compared with three classical content caching methods and two reinforcement learning algorithms, the proposed method reduces the content request delay and improves the cache hit rate.
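The abstract names two building blocks: the dueling Q-value decomposition (Q(s,a) = V(s) + A(s,a) − mean(A)) and horizontal federated averaging of locally trained parameters across edge nodes. The following is a minimal pure-Python sketch of both mechanisms, not the paper's actual implementation; all function names and the flat-list parameter representation are illustrative assumptions.

```python
# Hypothetical sketch of the two mechanisms the abstract names:
# (1) the dueling Q-network combination and (2) FedAvg-style
# horizontal federated averaging. Not the authors' code.

def dueling_q(value, advantages):
    """Combine state value V(s) with per-action advantages A(s, a)
    via the standard dueling identity Q(s, a) = V + A - mean(A)."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def federated_average(local_params, weights=None):
    """Aggregate parameter vectors from several fog access points.
    weights: optional per-node weights (e.g. local sample counts);
    defaults to a plain unweighted mean."""
    n = len(local_params)
    if weights is None:
        weights = [1.0] * n
    total = sum(weights)
    dim = len(local_params[0])
    return [
        sum(w * p[i] for w, p in zip(weights, local_params)) / total
        for i in range(dim)
    ]

# Example: combine one state's value/advantages, then average the
# (toy, 2-parameter) local models of two fog nodes.
q = dueling_q(1.0, [0.5, -0.5, 0.0])              # -> [1.5, 0.5, 1.0]
g = federated_average([[1.0, 2.0], [3.0, 4.0]])   # -> [2.0, 3.0]
```

Subtracting the mean advantage keeps the V/A decomposition identifiable, and exchanging only averaged parameters (rather than raw request traces) is what lets HFL cut the training-time communication cost the abstract refers to.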
Collections
- Open access [34540]