Nonlinear energy-harvesting for D2D networks underlaying UAV with SWIPT using MADQN
Ouamri, Mohamed Amine; Barb, Gordana; Singh, Daljeet; Adam, Abuzar B. M.; Muthanna, M. S. A.; Li, Xingwang (2023-05-15)
M. A. Ouamri, G. Barb, D. Singh, A. B. M. Adam, M. S. A. Muthanna and X. Li, "Nonlinear Energy-Harvesting for D2D Networks Underlaying UAV With SWIPT Using MADQN," in IEEE Communications Letters, vol. 27, no. 7, pp. 1804-1808, July 2023, doi: 10.1109/LCOMM.2023.3275989.
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
https://rightsstatements.org/vocab/InC/1.0/
https://urn.fi/URN:NBN:fi-fe20230907121212
Abstract
Energy Efficiency (EE) has become an essential metric in Device-to-Device (D2D) communication underlaying Unmanned Aerial Vehicles (UAVs). Among the several technologies that can provide significant energy, simultaneous wireless information and power transfer (SWIPT) has been proposed as a promising solution to improve EE. However, studying the EE under nonlinear energy harvesting (EH) is challenging due to the limited sensitivity and the composition of the nonlinear circuit. Moreover, when D2D users transmit information using the energy harvested from UAVs, interference to cellular users occurs and deteriorates the throughput. To tackle these problems, we leverage concepts from artificial intelligence (AI) to optimize the EE of UAV-assisted D2D communication. Specifically, a multi-agent deep Q-network (MADQN) approach is proposed to jointly maximize throughput and EE, where the reward function is defined in terms of the stated objective. Simulation results verify the superiority of the proposed approach over traditional algorithms.
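The letter itself provides no code; as a rough illustration of the two ingredients named in the abstract, the sketch below implements the commonly used logistic (saturation) nonlinear EH model and a hypothetical per-agent reward mixing throughput and energy efficiency. All parameter values, function names, and the reward weighting are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

# Assumed parameters for a logistic (sigmoidal) nonlinear EH model with saturation;
# values are illustrative only, not from the paper.
P_MAX = 0.02          # harvester saturation power in watts (assumed)
A, B = 150.0, 0.014   # circuit-dependent shape parameters (assumed)

def harvested_power_nonlinear(p_rx: float) -> float:
    """Map received RF power (W) to harvested DC power (W) with saturation."""
    logistic = 1.0 / (1.0 + np.exp(-A * (p_rx - B)))
    logistic_0 = 1.0 / (1.0 + np.exp(A * B))   # sigmoid value at zero input power
    return P_MAX * (logistic - logistic_0) / (1.0 - logistic_0)

def reward(throughput_bps: float, power_consumed_w: float,
           weight_ee: float = 0.5) -> float:
    """Hypothetical per-agent reward combining throughput and energy efficiency."""
    ee = throughput_bps / max(power_consumed_w, 1e-9)   # bits per joule
    return (1.0 - weight_ee) * throughput_bps + weight_ee * ee

# Example: a D2D transmitter harvesting from a UAV signal, then scoring its reward.
p_harvested = harvested_power_nonlinear(0.01)           # 10 mW received RF power
print(f"harvested {p_harvested * 1e3:.2f} mW")
print(f"reward = {reward(throughput_bps=2e6, power_consumed_w=p_harvested):.3e}")
```

In a MADQN setting, a reward of this kind would be evaluated per agent and fed to each deep Q-network during training; the weighting between throughput and EE here is purely a placeholder.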