Optimistic Multi-Agent Policy Gradient
Zhao, Wenshuai; Zhao, Yi; Li, Zhiyuan; Kannala, Juho; Pajarinen, Joni (2024-05-02)
ML Research Press
Zhao, W., Zhao, Y., Li, Z., Kannala, J., & Pajarinen, J. (2024). Optimistic Multi-Agent Policy Gradient. Proceedings of the 41st International Conference on Machine Learning. Proceedings of Machine Learning Research 235, 61186-61202.
https://creativecommons.org/licenses/by/4.0/
Copyright 2024 by the author(s). Licensed under Creative Commons Attribution 4.0 International.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:oulu-202409246032
Abstract
Relative overgeneralization (RO) occurs in cooperative multi-agent learning tasks when agents converge towards a suboptimal joint policy due to overfitting to suboptimal behaviors of other agents. No methods have been proposed for addressing RO in multi-agent policy gradient (MAPG) methods although these methods produce state-of-the-art results. To address this gap, we propose a general, yet simple, framework to enable optimistic updates in MAPG methods that alleviate the RO problem. Our approach involves clipping the advantage to eliminate negative values, thereby facilitating optimistic updates in MAPG. The optimism prevents individual agents from quickly converging to a local optimum. Additionally, we provide a formal analysis to show that the proposed method retains optimality at a fixed point. In extensive evaluations on a diverse set of tasks including the Multi-agent MuJoCo and Overcooked benchmarks, our method outperforms strong baselines on 13 out of 19 tested tasks and matches the performance on the rest.
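The core operation described in the abstract, clipping the advantage to eliminate negative values before the policy-gradient update, can be sketched as follows. This is a minimal illustration of the stated idea, not the authors' implementation; the function name and the plain REINFORCE-style loss are assumptions for the sketch.

```python
import numpy as np

def optimistic_pg_loss(log_probs, advantages):
    """Policy-gradient loss with optimistically clipped advantages.

    Clipping the advantage at zero means transitions with negative
    advantage contribute no gradient, so an agent is not pushed away
    from actions that currently look poor only because of other
    agents' (possibly suboptimal) behavior.
    """
    clipped = np.maximum(advantages, 0.0)  # optimistic clipping: drop negative advantages
    return -np.mean(log_probs * clipped)   # standard surrogate loss on the clipped values
```

With this clipping, an action whose advantage is negative under the current joint behavior yields zero gradient rather than a discouraging one, which is the mechanism the abstract credits with preventing individual agents from quickly converging to a local optimum.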
Collections
- Open access [38830]