OuluREPO – Oulun yliopiston julkaisuarkisto / University of Oulu repository
Cross-modal self-supervised learning for lip reading : when contrastive learning meets adversarial training

Sheng, Changchong; Pietikäinen, Matti; Tian, Qi; Liu, Li (2021-10-17)

 
Open file
nbnfi-fe2022030121341.pdf (1.046 MB)
nbnfi-fe2022030121341_meta.xml (36.14 kB)
nbnfi-fe2022030121341_solr.xml (30.05 kB)

URL:
https://doi.org/10.1145/3474085.3475415


Changchong Sheng, Matti Pietikäinen, Qi Tian, and Li Liu. 2021. Cross-modal Self-Supervised Learning for Lip Reading: When Contrastive Learning meets Adversarial Training. Proceedings of the 29th ACM International Conference on Multimedia. Association for Computing Machinery, New York, NY, USA, 2456–2464. DOI: https://doi.org/10.1145/3474085.3475415

https://rightsstatements.org/vocab/InC/1.0/
© 2021 Association for Computing Machinery. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Proceedings of the 29th ACM International Conference on Multimedia, https://doi.org/10.1145/3474085.3475415.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi-fe2022030121341
Abstract

The goal of this work is to learn discriminative visual representations for lip reading without access to manual text annotation. Recent advances in cross-modal self-supervised learning have shown that the corresponding audio can serve as a supervisory signal to learn effective visual representations for lip reading. However, existing methods exploit only the natural synchronization of the video and the corresponding audio. We observe that both video and audio are actually composed of speech-related information, identity-related information, and modality information. To make the visual representations (i) more discriminative for lip reading and (ii) invariant to identity and modality, we propose a novel self-supervised learning framework called Adversarial Dual-Contrast Self-Supervised Learning (ADC-SSL), which goes beyond previous methods by explicitly forcing the visual representations to be disentangled from speech-unrelated information. Experimental results clearly show that the proposed method outperforms state-of-the-art cross-modal self-supervised baselines by a large margin. Moreover, ADC-SSL can outperform its supervised counterpart without any fine-tuning.
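The cross-modal contrastive component the abstract builds on can be sketched as a standard InfoNCE objective between paired video and audio embeddings: each video clip's positive is its own audio track, and the other audio clips in the batch serve as negatives. This is a minimal NumPy illustration of that generic objective only, under assumed names (`info_nce`, `video_emb`, `audio_emb`); it is not the authors' ADC-SSL implementation, and the adversarial disentanglement branch is omitted.

```python
import numpy as np

def info_nce(video_emb, audio_emb, temperature=0.07):
    """Cross-modal InfoNCE loss: for each video clip the positive is its
    own audio track; every other audio clip in the batch is a negative."""
    # L2-normalise each modality so dot products are cosine similarities
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    logits = (v @ a.T) / temperature                 # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    idx = np.arange(len(v))
    # matched video/audio pairs lie on the diagonal
    return float(-np.mean(np.log(probs[idx, idx])))
```

With aligned pairs the loss is driven toward zero, while misaligned (shuffled) pairs yield a higher loss; that gradient signal is what lets the audio act as supervision without text labels.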

Collections
  • Avoin saatavuus (Open access) [38358]