Adaptive modality distillation for separable multimodal sentiment analysis
Peng, Wei; Hong, Xiaopeng; Zhao, Guoying (2021-02-09)
W. Peng, X. Hong and G. Zhao, "Adaptive Modality Distillation for Separable Multimodal Sentiment Analysis," in IEEE Intelligent Systems, vol. 36, no. 3, pp. 82-89, 1 May-June 2021, doi: 10.1109/MIS.2021.3057757
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
https://rightsstatements.org/vocab/InC/1.0/
https://urn.fi/URN:NBN:fi-fe202104099805
Abstract
Multimodal sentiment analysis has attracted increasing attention because, with the arrival of complementary data streams, it has great potential to improve upon and go beyond unimodal sentiment analysis. In this paper, we present an efficient separable multimodal learning method for tasks in which some modalities are missing. In this method, the multimodal tensor is used to guide the evolution of each separated modality representation. To reduce the computational expense, Tucker decomposition is introduced, which yields a general extension of the low-rank tensor fusion method with more modality interactions. This, in turn, enhances our modality distillation process. Comprehensive experiments on three popular multimodal sentiment analysis datasets, CMU-MOSI, POM, and IEMOCAP, show superior performance, especially when only partial modalities are available.
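The abstract's core efficiency idea — replacing a full multimodal weight tensor with a Tucker-factored one — can be illustrated with a minimal sketch. This is not the paper's implementation: all dimensions, rank choices, and names below are hypothetical, and the random factors stand in for learned parameters. The point is that contracting each modality with a small factor matrix and then mixing through a core tensor reproduces the full tensor-fusion contraction at a fraction of the cost.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): audio / video / text features,
# per-mode Tucker ranks, and the sentiment output dimension.
d_a, d_v, d_t = 8, 16, 32
r_a, r_v, r_t = 4, 4, 4
d_out = 3

# Factor matrices compress each modality into its small rank space;
# the core tensor G mixes the compressed modes into the output.
# In a learned model these would be trained parameters.
U_a = rng.standard_normal((d_a, r_a))
U_v = rng.standard_normal((d_v, r_v))
U_t = rng.standard_normal((d_t, r_t))
G = rng.standard_normal((r_a, r_v, r_t, d_out))

def tucker_fusion(z_a, z_v, z_t):
    """Fuse three modality vectors through a Tucker-factored weight tensor.

    Mathematically equivalent to contracting the full
    (d_a, d_v, d_t, d_out) weight tensor with the outer product
    z_a ⊗ z_v ⊗ z_t, but only the small core is ever materialized.
    """
    p_a = U_a.T @ z_a  # shape (r_a,)
    p_v = U_v.T @ z_v  # shape (r_v,)
    p_t = U_t.T @ z_t  # shape (r_t,)
    return np.einsum("i,j,k,ijko->o", p_a, p_v, p_t, G)

z_a = rng.standard_normal(d_a)
z_v = rng.standard_normal(d_v)
z_t = rng.standard_normal(d_t)
y = tucker_fusion(z_a, z_v, z_t)  # fused sentiment representation, shape (d_out,)
```

The full fusion tensor here would hold d_a · d_v · d_t · d_out = 12,288 entries, whereas the factored form stores only the three factor matrices and the 4×4×4×3 core, which is the kind of saving that makes richer modality interactions affordable.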
Collections
- Open access [37125]