Joint Content Update and Transmission Resource Allocation for Energy-Efficient Edge Caching of High Definition Map
Hong, Gaofeng; Yang, Bin; Su, Wei; Li, Haoru; Huang, Zekai; Taleb, Tarik (2023-11-29)
IEEE
G. Hong, B. Yang, W. Su, H. Li, Z. Huang and T. Taleb, "Joint Content Update and Transmission Resource Allocation for Energy-Efficient Edge Caching of High Definition Map," in IEEE Transactions on Vehicular Technology, vol. 73, no. 4, pp. 5902-5914, April 2024, doi: 10.1109/TVT.2023.3336824.
https://creativecommons.org/licenses/by/4.0/
© 2024 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:oulu-202403212355
Abstract
Caching the high definition map (HDM) on the edge network can significantly reduce the energy consumption of roadside sensors, which frequently perform traffic content update and transmission operations; these operations also have an important impact on the freshness of the content received by each vehicle. This paper aims to minimize the energy consumption of the roadside sensors while satisfying the vehicles' requirement for HDM content freshness, by jointly scheduling edge content updating and the downlink transmission resource allocation of the Road Side Unit (RSU). To this end, we propose a deep reinforcement learning based algorithm, namely the Prioritized Double Deep R-learning Network (PRD-DRN). Under this PRD-DRN algorithm, content update and transmission resource allocation are modeled as a Markov Decision Process (MDP). We take full advantage of deep R-learning and prioritized experience sampling to obtain the optimal decision, which minimizes the long-term average cost related to content freshness and energy consumption. Extensive simulations are conducted to verify the effectiveness of the proposed PRD-DRN algorithm and to illustrate its advantage in improving content freshness and reducing energy consumption compared with baseline policies.
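The abstract names prioritized experience sampling as one ingredient of the PRD-DRN algorithm. As an illustration only, the sketch below shows generic proportional prioritized sampling from a replay buffer; the class name, hyperparameters (`alpha`, `eps`), and buffer design are assumptions for this example and are not taken from the paper.

```python
import random

# Illustrative sketch of proportional prioritized experience sampling
# (not the paper's implementation). Transitions with larger TD error
# are stored with larger priority and drawn more often.
class PrioritizedReplayBuffer:
    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly the TD error skews sampling
        self.eps = eps              # keeps every priority strictly positive
        self.buffer = []
        self.priorities = []
        self.pos = 0                # next slot to overwrite once full

    def push(self, transition, td_error=1.0):
        priority = (abs(td_error) + self.eps) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            # Ring-buffer overwrite of the oldest transition.
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sample indices with probability proportional to stored priority.
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idx = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        return [self.buffer[i] for i in idx], idx
```

In a double deep R-learning setting, the TD error passed to `push` would come from the learning update itself, so that surprising transitions are replayed preferentially when training the value network.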
Collections
- Open access [34340]