Learning visual and textual representations for multimodal matching and classification
Liu, Yu; Liu, Li; Guo, Yanming; Lew, Michael S. (2018-07-02)
Yu Liu, Li Liu, Yanming Guo, Michael S. Lew. Learning visual and textual representations for multimodal matching and classification. Pattern Recognition, Volume 84, 2018, Pages 51-67, ISSN 0031-3203. https://doi.org/10.1016/j.patcog.2018.07.001
© 2018 Published by Elsevier Ltd. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
Multimodal learning has been an important and challenging problem for decades; it aims to bridge the modality gap between heterogeneous representations, such as vision and language. Unlike many current approaches, which focus only on either multimodal matching or classification, we propose a unified network that jointly learns multimodal matching and classification (MMC-Net) between images and texts. The proposed MMC-Net model seamlessly integrates the matching and classification components: it first learns visual and textual embedding features in the matching component, and then generates discriminative multimodal representations in the classification component. Combining the two components in a unified model can improve the performance of both. Moreover, we present a multi-stage training algorithm that minimizes both the matching and classification loss functions. Experimental results on four well-known multimodal benchmarks demonstrate the effectiveness and efficiency of the proposed approach, which achieves competitive performance for multimodal matching and classification compared to state-of-the-art approaches.
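The abstract describes training with two objectives at once: a matching loss over visual and textual embeddings and a classification loss over the fused multimodal representation. The sketch below illustrates one common way such a joint objective is formed, using a bidirectional hinge ranking loss over cosine similarities plus a cross-entropy term. The specific loss forms, the margin value, and the weighting factor are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def l2norm(x):
    """Row-wise L2 normalization, so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def ranking_loss(img_emb, txt_emb, margin=0.2):
    """Bidirectional hinge ranking loss (a common matching objective;
    the margin value here is an assumption, not from the paper).
    Matched image-text pairs sit on the diagonal of the similarity matrix."""
    img, txt = l2norm(img_emb), l2norm(txt_emb)
    sim = img @ txt.T                  # (n, n) cosine similarities
    pos = np.diag(sim)                 # similarities of matched pairs
    # Penalize negatives that come within `margin` of the matched pair,
    # in both image-to-text and text-to-image directions.
    cost_i2t = np.maximum(0.0, margin + sim - pos[:, None])
    cost_t2i = np.maximum(0.0, margin + sim - pos[None, :])
    mask = 1.0 - np.eye(sim.shape[0])  # exclude the positive pairs themselves
    return float(((cost_i2t + cost_t2i) * mask).sum() / sim.shape[0])

def cross_entropy(logits, labels):
    """Standard softmax cross-entropy for the classification component."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-logp[np.arange(len(labels)), labels].mean())

def joint_loss(img_emb, txt_emb, logits, labels, lam=1.0):
    """Joint objective L = L_match + lam * L_cls; the weighting `lam`
    is a hypothetical hyperparameter for this sketch."""
    return ranking_loss(img_emb, txt_emb) + lam * cross_entropy(logits, labels)
```

In a multi-stage schedule like the one the abstract mentions, such a combined loss would typically be minimized after the matching component has been pre-trained on the ranking term alone.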
- Open access