Machine learning approach for predicting the arrival of downlink control information in 5G
Tölli, Lassi (2024-03-15)
© 2024 Lassi Tölli. Unless otherwise stated, reuse is permitted under the Creative Commons Attribution 4.0 International (CC-BY 4.0) licence (https://creativecommons.org/licenses/by/4.0/). Reuse is allowed provided the source is properly cited and any changes are indicated. Use or reproduction of parts that are not the property of the author(s) may require permission directly from the respective rights holders.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:oulu-202403152253
Abstract
Over the past decade, remarkable advancements have been achieved in the domain of wireless communications, leading to the widespread deployment of 5G technology in various critical applications. Simultaneously, the field of AI has been advancing at a rapid pace. The power of AI is harnessed in applications ranging from healthcare and e-commerce to people's personal devices, such as smartphones. It comes as no surprise that AI could be highly beneficial to 5G as well. Modern 5G applications impose strict requirements on attributes such as the latency and reliability of the connection, and it has been speculated that AI will play a crucial role in meeting these increasingly demanding requirements.
The purpose of this thesis is to investigate a potential application of AI, more specifically its subset machine learning, in the realm of 5G resource management. This thesis presents a machine learning approach designed to enhance the procedure of receiving downlink control information messages in 5G by predicting the correct time slots for their reception. This procedure is of great importance, as successfully receiving these messages is a prerequisite for allocating the resources needed for sending or receiving any actual data with user equipment, such as smartphones. Correctly predicting the future timings of the downlink control information would enable more optimized decision-making in the present, leading to better overall performance of the 5G network.
To investigate this topic, an experiment is conducted to compare the performance of various machine learning models in addressing this specific problem. The goal of the experiment is to find out how accurately these models can predict the arrival of the downlink control information in individual time slots. The data for the experiment is collected from field tests conducted around the world, representing authentic 5G usage scenarios. In addition to evaluating the accuracy of the models, they are also compared in terms of how feasible they would be for this particular application.
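As a purely illustrative sketch of how such a comparison might be framed (the thesis body describes the actual data, features, and models; the synthetic trace, window length, and classifiers below are assumptions), per-slot downlink control information arrival can be treated as a binary classification problem over a window of past slot occupancy:

```python
# Hypothetical sketch (not the thesis's actual pipeline): predicting whether
# a DCI message arrives in the next slot, framed as binary classification
# over a window of past slot occupancy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic occupancy trace: 1 = DCI received in the slot, 0 = no DCI.
# A rough periodic pattern with 5 % noise stands in for real field-test data.
slots = ((np.arange(5000) % 5 == 0) ^ (rng.random(5000) < 0.05)).astype(int)

WINDOW = 20  # number of past slots used as features (an assumption)
X = np.lib.stride_tricks.sliding_window_view(slots[:-1], WINDOW)
y = slots[WINDOW:]  # label: was a DCI received in the slot after the window?

split = int(0.8 * len(y))
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

# Compare two off-the-shelf classifiers on held-out slot-level accuracy.
results = {}
for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=50, random_state=0)):
    model.fit(X_train, y_train)
    results[type(model).__name__] = accuracy_score(y_test, model.predict(X_test))

for name, acc in results.items():
    print(f"{name}: slot-level accuracy = {acc:.3f}")
```

In a real deployment the feasibility comparison would also weigh inference latency and model size, since the prediction must be made within the slot timing budget of the user equipment.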
The results of the experiment highlight the great potential machine learning has for improving the performance of 5G networks. The models included in the experiment managed to predict the correct time slots for the downlink control information with quite high accuracy. While it is evident that further optimization of the models is needed to meet the strict requirements of modern 5G applications, this thesis demonstrates that the current procedures used in 5G could certainly benefit from the predictive power of these machine learning models.