MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised Learning
Lian, Zheng; Sun, Haiyang; Sun, Licai; Chen, Kang; Xu, Mingyu; Wang, Kexin; Xu, Ke; He, Yu; Li, Ying; Zhao, Jinming; Liu, Ye; Liu, Bin; Yi, Jiangyan; Wang, Meng; Cambria, Erik; Zhao, Guoying; Schuller, Björn W.; Tao, Jianhua (2023-10-27)
Lian, Z., Sun, H., Sun, L., Chen, K., Xu, M., Wang, K., Xu, K., He, Y., Li, Y., Zhao, J., Liu, Y., Liu, B., Yi, J., Wang, M., Cambria, E., Zhao, G., Schuller, B. W., & Tao, J. (2023). MER 2023: Multi-label learning, modality robustness, and semi-supervised learning. Proceedings of the 31st ACM International Conference on Multimedia, 9610–9614. https://doi.org/10.1145/3581783.3612836
https://creativecommons.org/licenses/by/4.0/
© 2023 Copyright held by the owner/author(s). This work is licensed under a Creative Commons Attribution 4.0 International License.
The persistent address of this publication is
https://urn.fi/URN:NBN:fi:oulu-202401181338
Abstract
The first Multimodal Emotion Recognition Challenge (MER 2023) was successfully held at ACM Multimedia. The challenge focuses on system robustness and consists of three distinct tracks: (1) MER-MULTI, where participants are required to recognize both discrete and dimensional emotions; (2) MER-NOISE, in which noise is added to test videos to evaluate modality robustness; (3) MER-SEMI, which provides a large number of unlabeled samples for semi-supervised learning. In this paper, we introduce the motivation behind the challenge, describe the benchmark dataset, and provide some statistics about the participants. To continue using this dataset after MER 2023, please sign a new End User License Agreement and send it to our official email address. We believe this high-quality dataset can become a new benchmark in multimodal emotion recognition, especially for the Chinese research community.
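The MER-NOISE track evaluates modality robustness by adding noise to the test videos. As an illustrative, unofficial sketch of one such corruption, the snippet below mixes white Gaussian noise into an audio waveform at a chosen signal-to-noise ratio; the function `add_noise_at_snr` and the SNR-based protocol are assumptions for illustration only and do not describe the challenge's documented procedure.

```python
# Hypothetical sketch: corrupting an audio waveform with additive Gaussian
# noise at a target signal-to-noise ratio (SNR). The MER-NOISE track only
# states that noise is added to test videos; the exact corruption protocol
# is not specified here, so this is an illustrative assumption.
import numpy as np

def add_noise_at_snr(signal: np.ndarray, snr_db: float, seed: int = 0) -> np.ndarray:
    """Return `signal` mixed with white Gaussian noise at `snr_db` dB SNR."""
    rng = np.random.default_rng(seed)
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Example: a 1-second 440 Hz tone sampled at 16 kHz, corrupted at 5 dB SNR.
clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
noisy = add_noise_at_snr(clean, snr_db=5.0)
```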
Collections
- Open access [38840]