Parallel Adaptive Stochastic Gradient Descent Algorithms for Latent Factor Analysis of High-Dimensional and Incomplete Industrial Data
Qin, Wen; Luo, Xin; Li, Shuai; Zhou, MengChu (2023-06-01)
W. Qin, X. Luo, S. Li and M. Zhou, "Parallel Adaptive Stochastic Gradient Descent Algorithms for Latent Factor Analysis of High-Dimensional and Incomplete Industrial Data," in IEEE Transactions on Automation Science and Engineering, vol. 21, no. 3, pp. 2716-2729, July 2024, doi: 10.1109/TASE.2023.3267609.
https://rightsstatements.org/vocab/InC/1.0/
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:oulu-202403152237
Abstract
Latent factor analysis (LFA) is efficient in knowledge discovery from a high-dimensional and incomplete (HDI) matrix, which is frequently encountered in industrial big data-related applications. A stochastic gradient descent (SGD) algorithm is commonly adopted as the learning algorithm for LFA owing to its high efficiency. However, its sequential nature makes it less scalable when processing large-scale data. Although alternating SGD decouples an LFA process to achieve parallelization, its performance relies on its hyper-parameters, which are highly expensive to tune. To address this issue, this paper presents three extended alternating SGD algorithms whose hyper-parameters are made adaptive through particle swarm optimization. Correspondingly, three Parallel Adaptive LFA (PAL) models are proposed, which achieve highly efficient latent factor acquisition from an HDI matrix. Experiments have been conducted on four HDI matrices collected from industrial applications, and the benchmark models are LFA models based on state-of-the-art parallel SGD algorithms, including alternating SGD, Hogwild!, distributed gradient descent, and sparse matrix factorization parallelization. The results demonstrate that, compared with the benchmarks, the proposed PAL models achieve substantial speedup with 32 threads, and they achieve the highest prediction accuracy for missing data in most cases. Note to Practitioners —HDI data are commonly encountered in many industrial big data-related applications, from which rich knowledge and patterns can be extracted efficiently. An SGD-based LFA model is popular in addressing HDI data due to its efficiency. Yet when dealing with large-scale HDI data, its serial nature greatly reduces its scalability. Although alternating SGD can decouple an LFA process to implement parallelization, its performance depends on its hyper-parameters, whose tuning is tedious.
To address this vital issue, this study proposes three extended alternating SGD algorithms whose hyper-parameters are made adaptive through a particle swarm optimizer. Based on them, three models are realized, which are able to efficiently obtain latent factors from HDI matrices. Compared with existing state-of-the-art models, they enjoy a hyper-parameter-adaptive learning process, as well as highly competitive computational efficiency and representation learning ability. Hence, they provide practitioners with more scalable solutions when addressing large HDI data from industrial applications.
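To fix ideas, the baseline the abstract builds on can be sketched as plain (serial) SGD-based latent factor analysis over the known cells of a sparse matrix. The sketch below is illustrative only: the function name, data layout, and hyper-parameter values are assumptions, and the paper's alternating parallel scheme and PSO-based hyper-parameter adaptation are not reproduced; `lr` and `reg` are the kind of hyper-parameters the proposed PAL models would adapt automatically.

```python
import random

def sgd_lfa(entries, m, n, k=4, lr=0.02, reg=0.02, epochs=300, seed=0):
    """Minimal SGD latent factor analysis on an HDI (sparse) matrix.

    entries: list of (row, col, value) triples for the KNOWN cells only;
    missing cells are simply never visited, which is what makes LFA
    suitable for incomplete matrices.
    Returns P (m x k) and Q (n x k) with P[i] . Q[j] ~ known entry (i, j).
    """
    rng = random.Random(seed)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(m)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n)]
    for _ in range(epochs):
        for i, j, r in entries:
            pred = sum(P[i][f] * Q[j][f] for f in range(k))
            err = r - pred
            for f in range(k):
                p, q = P[i][f], Q[j][f]
                # SGD step with L2 regularization; lr and reg are the
                # hyper-parameters a PSO layer could tune per iteration
                P[i][f] += lr * (err * q - reg * p)
                Q[j][f] += lr * (err * p - reg * q)
    return P, Q

def rmse(entries, P, Q):
    """Root-mean-square error of the factorization on the given cells."""
    se = [(r - sum(p * q for p, q in zip(P[i], Q[j]))) ** 2
          for i, j, r in entries]
    return (sum(se) / len(se)) ** 0.5

# Hypothetical toy data: the 9 known cells of a 3x3 rank-1 matrix.
known = [(i, j, float((i + 1) * (j + 1))) for i in range(3) for j in range(3)]
P, Q = sgd_lfa(known, 3, 3)
print(rmse(known, P, Q))  # training error; small after a few hundred epochs
```

The inner loop is exactly the per-entry update whose sequential dependence between consecutive entries limits scalability; alternating SGD removes that dependence by fixing one factor matrix while updating the other, which is what enables the multi-threaded parallelization the paper evaluates.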
Collections
- Open access [34512]