Wireless resource scheduling in virtualized radio access networks using stochastic learning
Chen, Xianfu; Han, Zhu; Zhang, Honggang; Xue, Guoliang; Xiao, Yong; Bennis, Mehdi (2017-08-22)
X. Chen, Z. Han, H. Zhang, G. Xue, Y. Xiao and M. Bennis, "Wireless Resource Scheduling in Virtualized Radio Access Networks Using Stochastic Learning," in IEEE Transactions on Mobile Computing, vol. 17, no. 4, pp. 961-974, 1 April 2018. doi: 10.1109/TMC.2017.2742949
© 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
https://rightsstatements.org/vocab/InC/1.0/
https://urn.fi/URN:NBN:fi-fe2018080833508
Abstract
How to allocate the limited wireless resource in dense radio access networks (RANs) remains challenging. By leveraging a software-defined control plane, the independent base stations (BSs) are virtualized as a centralized network controller (CNC). Such virtualization decouples the CNC from the wireless service providers (WSPs). We investigate a virtualized RAN, where the CNC auctions channels to the mobile terminals (MTs) at the beginning of each scheduling slot, based on bids from their subscribing WSPs. Each WSP aims to maximize the expected long-term payoff from bidding for channels to serve the packet transmissions of its subscribed MTs. We formulate the problem as a stochastic game, in which the channel auction and packet scheduling decisions of a WSP depend on the network state and the control policies of its competitors. To approach the equilibrium solution, we propose an abstract stochastic game with a bounded regret. The decision-making process of each WSP is then modeled as a Markov decision process (MDP). To address the signalling overhead and computational complexity issues, we decompose the MDP into a series of single-agent MDPs with reduced state spaces, and derive an online localized algorithm to learn the state value functions. Our results show significant performance improvements in terms of per-MT average utility.
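The abstract compresses several moving parts: a per-slot channel auction run by the CNC, bids chosen by each WSP to maximize a discounted long-term payoff, and an online algorithm that learns per-WSP state value functions over reduced local states. As a minimal sketch of how these pieces could fit together, assuming toy queue/channel dynamics, discrete bid levels, and a TD(0)-style value update (all names, dynamics, and parameters below are invented for illustration; the paper's actual auction mechanism and localized learning algorithm differ in detail):

```python
import random
from collections import defaultdict

# Illustrative sketch only: a per-slot channel auction among WSPs, each of
# which learns a value function over a reduced local state. All dynamics
# and parameters here are hypothetical assumptions, not the paper's design.

BIDS = [0, 1, 2]      # hypothetical discrete bid levels per WSP
ALPHA = 0.1           # learning rate for the value-function update
GAMMA = 0.9           # discount factor over scheduling slots
EPSILON = 0.1         # exploration probability

class WSP:
    """One wireless service provider learning a local state value function."""

    def __init__(self):
        self.V = defaultdict(float)              # state -> learned long-term value
        self.state = (0, random.randint(0, 2))   # (queue length, channel quality)

    def bid(self):
        """Epsilon-greedy bid against a one-sample lookahead of the value estimates."""
        if random.random() < EPSILON:
            return random.choice(BIDS)
        return max(BIDS, key=lambda b: self._payoff(b) + GAMMA * self.V[self._next(b)])

    def _payoff(self, bid, won=True):
        # Toy immediate payoff: utility of served packets minus the bid cost.
        queue, channel = self.state
        served = min(queue, channel) if (won and bid > 0) else 0
        return served - 0.5 * bid

    def _next(self, bid, won=True):
        # Toy dynamics: served packets leave, new ones arrive, channel varies.
        queue, channel = self.state
        served = min(queue, channel) if (won and bid > 0) else 0
        arrivals = random.randint(0, 2)
        return (min(queue - served + arrivals, 5), random.randint(0, 2))

    def observe(self, bid, won):
        """TD(0)-style update of the state value function after the slot outcome."""
        reward = self._payoff(bid, won)
        next_state = self._next(bid, won)
        self.V[self.state] += ALPHA * (reward + GAMMA * self.V[next_state]
                                       - self.V[self.state])
        self.state = next_state

wsps = [WSP() for _ in range(3)]
for slot in range(10_000):          # per-slot channel auction at the CNC
    bids = [w.bid() for w in wsps]
    winner = max(range(len(wsps)), key=lambda i: bids[i])  # highest bid wins
    for i, w in enumerate(wsps):
        w.observe(bids[i], won=(i == winner))
```

Each WSP here learns independently from its own local state, which mirrors the abstract's point about decomposing the game into single-agent MDPs with reduced state spaces to cut signalling overhead and computation.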
Collections
- Open access [29998]