BayGO: Decentralized Bayesian Learning and Information-Aware Graph Optimization Framework
AlShammari, Tamara; Weeraddana, Chathuranga; Bennis, Mehdi (2024-04-10)
IEEE
T. AlShammari, C. Weeraddana and M. Bennis, "BayGO: Decentralized Bayesian Learning and Information-Aware Graph Optimization Framework," in IEEE Transactions on Signal Processing, vol. 72, pp. 2101-2116, 2024, doi: 10.1109/TSP.2024.3387277.
https://rightsstatements.org/vocab/InC/1.0/
© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:oulu-202409165878
Abstract
Multi-agent Decentralized Learning (MADL) is a scalable approach that enables agents to learn from their local datasets. However, it faces significant challenges: dataset heterogeneity and the communication graph structure both affect learning speed, and it lacks a robust method for quantifying prediction uncertainty. To address these challenges, we propose BayGO, a novel fully decentralized framework that combines multi-agent local Bayesian learning with local averaging (often referred to as non-Bayesian social learning) and graph optimization. Within BayGO, each agent learns a posterior distribution over the model parameters, updating it with its local dataset and sharing it with its neighbors. We derive an aggregation rule for combining the received posterior distributions that achieves optimality and consensus. Moreover, we theoretically derive the convergence rate of the agents' posterior distributions, which accounts for both the network structure and the information heterogeneity among agents. To expedite learning, agents use this convergence rate as an objective, optimizing it over the network structure in alternation with their posterior updates. Agents can thereby fine-tune their network connections according to the information content of their neighbors, which leads to a sparse graph configuration in which each agent communicates exclusively with the neighbor offering the highest information gain, enhancing communication efficiency. Our simulations corroborate that BayGO accelerates learning compared to fully-connected and star topologies, owing to its capacity to select neighbors based on information gain.
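As a structural illustration of the learn-then-share loop the abstract describes, the following Python sketch runs decentralized belief updates over a finite hypothesis set. It assumes the standard log-linear pooling rule (a weighted geometric average of neighbors' posteriors) commonly used in non-Bayesian social learning, and it replaces BayGO's convergence-rate-driven graph optimization with a crude KL-divergence heuristic for picking the most informative neighbor; the paper's actual aggregation rule and graph objective are not reproduced here. The observation model, the mixing matrix W, and the helper names (local_bayes_update, pool, sparsify) are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_agents, n_hyp = 4, 3
TRUE_H = 2  # index of the hypothesis generating the data (illustrative)

# Illustrative per-agent observation model: P(obs = 1 | hypothesis).
obs_model = np.array([[0.6, 0.3, 0.1],
                      [0.2, 0.5, 0.3],
                      [0.1, 0.2, 0.7],
                      [0.4, 0.4, 0.2]])

# Start fully connected with uniform mixing weights and uniform priors.
W = np.full((n_agents, n_agents), 1.0 / n_agents)
beliefs = np.full((n_agents, n_hyp), 1.0 / n_hyp)

def local_bayes_update(beliefs, obs):
    # Step 1: each agent updates its belief with its private observation.
    lik = np.where(obs[:, None] == 1, obs_model, 1.0 - obs_model)
    post = beliefs * lik
    return post / post.sum(axis=1, keepdims=True)

def pool(beliefs, W):
    # Step 2: log-linear pooling, i.e. a weighted geometric average of the
    # posteriors received from neighbors (assumed aggregation rule).
    pooled = np.exp(W @ np.log(beliefs))
    return pooled / pooled.sum(axis=1, keepdims=True)

def sparsify(beliefs):
    # Step 3 (heuristic stand-in for BayGO's graph optimization): each agent
    # keeps only the neighbor whose posterior diverges most from its own,
    # using KL divergence as a crude proxy for information gain.
    W_new = np.zeros((n_agents, n_agents))
    for i in range(n_agents):
        kl = np.array([np.sum(beliefs[j] * np.log(beliefs[j] / beliefs[i]))
                       for j in range(n_agents)])
        kl[i] = -np.inf  # exclude self when picking the best neighbor
        j_star = int(np.argmax(kl))
        W_new[i, i] = W_new[i, j_star] = 0.5  # self-weight plus one neighbor
    return W_new

for t in range(50):
    obs = (rng.random(n_agents) < obs_model[:, TRUE_H]).astype(int)
    beliefs = pool(local_bayes_update(beliefs, obs), W)
    W = sparsify(beliefs)  # alternate belief updates with graph updates

print(beliefs.argmax(axis=1))  # agents' consensus hypothesis estimates

In the paper, the mixing weights themselves are optimized against the derived convergence rate rather than chosen by a fixed heuristic; the alternation between posterior updates and graph updates shown in the loop mirrors that structure.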
Collections
- Open access [38841]