Multi-tenant cross-slice resource orchestration: a deep reinforcement learning approach
Chen, Xianfu; Zhao, Zhifeng; Wu, Celimuge; Bennis, Mehdi; Liu, Hang; Ji, Yusheng; Zhang, Honggang (2019-08-08)
X. Chen et al., "Multi-Tenant Cross-Slice Resource Orchestration: A Deep Reinforcement Learning Approach," in IEEE Journal on Selected Areas in Communications, vol. 37, no. 10, pp. 2377-2392, Oct. 2019, https://doi.org/10.1109/JSAC.2019.2933893
© 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
With cellular networks becoming increasingly agile, a major challenge lies in supporting diverse services for mobile users (MUs) over a common physical network infrastructure. Network slicing is a promising solution for tailoring the network to match such service requests. This paper considers a system with radio access network (RAN)-only slicing, where the physical infrastructure is split into slices providing computation and communication functionalities. A limited number of channels are auctioned across scheduling slots to the MUs of multiple service providers (SPs), i.e., the tenants. Each SP behaves selfishly to maximize the expected long-term payoff from the competition with the other SPs for the orchestration of channels, which provides its MUs with the opportunity to access the computation and communication slices. This problem is modelled as a stochastic game, in which the decision making of an SP depends on the global network dynamics as well as the joint control policy of all SPs. To approximate the Nash equilibrium solutions, we first construct an abstract stochastic game with local conjectures of the channel auction among the SPs. We then linearly decompose the per-SP Markov decision process to simplify the decision making at each SP and derive an online scheme based on deep reinforcement learning to approach the optimal abstract control policies. Numerical experiments show significant performance gains from our scheme.
- Open access