Exploring the potential of immersive data visualization and generative AI–driven forecasting in extended reality (XR)
Kaluarachchi Thennakoon Appuhamilage, Nirasha (2025-06-12)
© 2025 Nirasha Kaluarachchi Thennakoon Appuhamilage. Unless otherwise noted, reuse is permitted under the Creative Commons Attribution 4.0 International (CC-BY 4.0) licence (https://creativecommons.org/licenses/by/4.0/). Reuse is allowed provided the source is properly cited and any changes are indicated. Use or reproduction of elements that are not the property of the author(s) may require permission directly from the respective rights holders.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:oulu-202506124424
Abstract
Artificial intelligence (AI) and extended reality (XR) technologies are converging to enable immersive, predictive, and interactive data experiences. This thesis explores how predictive AI models designed for time-series forecasting can be integrated with immersive XR systems to visualize both real-time and predicted future environmental conditions. The study focuses on CO2 monitoring in indoor spaces, aiming to improve spatial awareness and decision-making through generative data visualizations.
To explore this, a real-time immersive system was developed using Unreal Engine 5, a Flask backend, and the Smart Campus API. The system visualizes both live CO2 sensor data and predicted values generated by deep learning models (LSTM, GRU, and Transformer). A modular architecture was implemented, enabling seamless data acquisition, AI inference, and VR rendering. Performance was evaluated using quantitative metrics such as frame rate, stale frames, system latency, and model accuracy.
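To make the described data flow concrete, the following is a minimal Python sketch of how a Flask backend of this kind could expose both a live reading and a model forecast to the XR client. The endpoint path, sensor URL, model file name, input window size, and API response shape are all illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch of a Flask backend bridging sensor data and AI inference.
# The sensor URL, endpoint path, and model file name are hypothetical.
import numpy as np
import requests
from flask import Flask, jsonify
from tensorflow import keras

app = Flask(__name__)
model = keras.models.load_model("gru_co2_forecaster.h5")  # hypothetical file

SENSOR_URL = "https://example-smartcampus.example/api/co2/latest"  # placeholder

def fetch_recent_co2(window=24):
    """Fetch the most recent CO2 readings (assumed API response shape)."""
    resp = requests.get(SENSOR_URL, params={"count": window}, timeout=5)
    resp.raise_for_status()
    return [point["value"] for point in resp.json()["readings"]]

@app.route("/co2")
def co2():
    history = fetch_recent_co2()
    # Reshape to (batch, timesteps, features), as a GRU/LSTM model expects.
    x = np.array(history, dtype="float32").reshape(1, -1, 1)
    predicted = float(model.predict(x, verbose=0)[0, 0])
    # The XR client polls this endpoint and drives the visualization from it.
    return jsonify({"live": history[-1], "predicted": predicted})

if __name__ == "__main__":
    app.run(port=5000)
```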
The GRU model emerged as the best-performing predictor, achieving an RMSE of 16.22 and an R² of 0.9847. However, latency analysis revealed that AI inference introduced notable delays (up to 8.5 seconds), indicating room for optimization. Despite this, the system maintained stable frame rates (above 40 FPS) and responsive rendering in VR. The evaluation results show that integrating predictive intelligence into XR is technically feasible and offers new potential for spatially embedded analytics.
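For reference, the RMSE and R² values reported above follow the standard regression-metric definitions. The short sketch below shows how they can be computed with scikit-learn; the sample arrays are made-up illustrative values, not data from the thesis.

```python
# Computing RMSE and R^2 for a forecast using standard definitions.
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([420.0, 435.0, 450.0])  # illustrative CO2 readings (ppm)
y_pred = np.array([418.0, 440.0, 447.0])  # illustrative model forecasts

rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # RMSE = sqrt(mean error^2)
r2 = r2_score(y_true, y_pred)                       # R^2 = 1 - SS_res / SS_tot
print(f"RMSE = {rmse:.2f} ppm, R^2 = {r2:.4f}")
```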
This thesis provides practical insights for designing AI-driven XR environments. The results inform future applications in smart buildings, digital twins, and environmental awareness systems, emphasizing the importance of modularity, model efficiency, and user-centered visualization design.
Collections
- Open access [38841]