Dynamic Hierarchical Reinforcement Learning Framework for Energy-Efficient 5G Base Stations in Urban Environments
Xu, Dianlei; Su, Xiang; Premsankar, Gopika; Wang, Huandong; Tarkoma, Sasu; Hui, Pan (2025-04-02)
IEEE
D. Xu, X. Su, G. Premsankar, H. Wang, S. Tarkoma and P. Hui, "Dynamic Hierarchical Reinforcement Learning Framework for Energy-Efficient 5G Base Stations in Urban Environments," in IEEE Transactions on Mobile Computing, doi: 10.1109/TMC.2025.3557280.
© The Author(s) 2025. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:oulu-202504102521
Abstract:
The energy consumption of 5G base stations (BSs) is significantly higher than that of 4G BSs, creating challenges for operators due to increased costs and carbon emissions. Existing solutions address this issue by switching off BSs during specific periods or by forming cooperation coalitions in which some BSs deactivate while others serve users. However, these approaches often rely on fixed geographic configurations, making them unsuitable for urban areas with numerous BSs and mobile users. To tackle these challenges, we propose a hierarchical reinforcement learning (RL) framework for energy conservation in large-scale 5G networks. In the upper layer, a deep Q-network integrated with a graph convolutional network dynamically groups BSs into coalitions from a macro perspective. This layer focuses on high-level coalition formation to optimize system-wide energy efficiency by considering the global state of the network. In the lower layer, we combine an attention mechanism with multi-agent RL and graph convolutional networks to design a scalable algorithm that maximizes local energy efficiency by optimizing cooperation within each coalition. Together, the two layers align global coalition dynamics with local intra-coalition cooperation to achieve system-wide energy optimization. Moreover, we accurately model large-scale urban 5G scenarios using a high-fidelity network simulator, which enables our RL framework to learn from realistic feedback. Extensive experiments conducted with the simulator demonstrate that our framework achieves energy savings of up to 75.6%, significantly outperforming baseline approaches. These findings highlight the effectiveness of our hierarchical RL optimization framework in addressing the energy consumption challenges faced by large-scale 5G networks.
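To make the two-layer structure described in the abstract concrete, the following is a minimal illustrative sketch (not the authors' code) of an upper-layer Q-network that applies one graph-convolution step over a BS neighbourhood graph and scores a coalition assignment for each BS. All names, shapes, and hyperparameters below are assumptions for illustration only, and the graph here is a placeholder rather than a real BS topology.

```python
# Minimal sketch, assuming PyTorch: one mean-aggregated GCN step
# followed by a Q-head that scores K candidate coalitions per BS.
import torch
import torch.nn as nn

class CoalitionQNetwork(nn.Module):
    def __init__(self, num_features: int, hidden_dim: int, num_coalitions: int):
        super().__init__()
        self.gcn = nn.Linear(num_features, hidden_dim)        # shared GCN weight
        self.q_head = nn.Linear(hidden_dim, num_coalitions)   # Q-value per coalition

    def forward(self, features: torch.Tensor, adjacency: torch.Tensor) -> torch.Tensor:
        # features:  (num_bs, num_features) per-BS state (e.g. load, traffic)
        # adjacency: (num_bs, num_bs) neighbourhood graph with self-loops
        deg = adjacency.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = torch.relu(self.gcn(adjacency @ features / deg))  # aggregate neighbour features
        return self.q_head(h)                                  # (num_bs, num_coalitions)

# Usage: greedy coalition assignment from the learned Q-values.
num_bs, num_features, num_coalitions = 8, 4, 3
net = CoalitionQNetwork(num_features, hidden_dim=16, num_coalitions=num_coalitions)
x = torch.rand(num_bs, num_features)
adj = torch.eye(num_bs)                 # placeholder graph; a real one encodes BS adjacency
coalition_ids = net(x, adj).argmax(dim=1)  # coalition index chosen for each BS
print(coalition_ids)
```

In the paper's framework this upper-layer assignment would be trained with a deep Q-learning objective on system-wide energy efficiency, while a separate lower-layer multi-agent policy handles cooperation inside each coalition; that second layer is omitted from this sketch.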
Collections
- Open access [38841]