OuluREPO – Oulun yliopiston julkaisuarkisto / University of Oulu repository

Adaptive semantic-spatio-temporal graph convolutional network for lip reading

Sheng, Changchong; Zhu, Xinzhong; Xu, Huiying; Pietikäinen, Matti; Liu, Li (2021-08-16)

 
Open file
nbnfi-fe2022100661272.pdf (2.794 MB)
nbnfi-fe2022100661272_meta.xml (38.75 KB)
nbnfi-fe2022100661272_solr.xml (34.65 KB)

URL:
https://doi.org/10.1109/tmm.2021.3102433

Institute of Electrical and Electronics Engineers, 16.08.2021

C. Sheng, X. Zhu, H. Xu, M. Pietikäinen and L. Liu, "Adaptive Semantic-Spatio-Temporal Graph Convolutional Network for Lip Reading," in IEEE Transactions on Multimedia, vol. 24, pp. 3545-3557, 2022, doi: 10.1109/TMM.2021.3102433

https://rightsstatements.org/vocab/InC/1.0/
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi-fe2022100661272
Abstract

The goal of this work is to recognize words, phrases, and sentences spoken by a talking face without being given the audio. Current deep learning approaches to lip reading focus on exploiting the appearance and optical flow information of videos. However, these methods do not fully exploit the characteristics of lip motion. In addition to appearance and optical flow, the mouth contour deformation usually conveys significant information that is complementary to the other cues. However, the modeling of dynamic mouth contours has received less attention than that of appearance and optical flow. In this work, we propose a novel model of dynamic mouth contours, called Adaptive Semantic-Spatio-Temporal Graph Convolutional Network (ASST-GCN), that goes beyond previous methods by automatically learning both the spatial and temporal information from videos. To combine the complementary information from appearance and mouth contour, a two-stream visual front-end network is proposed. Experimental results demonstrate that the proposed method significantly outperforms state-of-the-art lip reading methods on several large-scale lip reading benchmarks.
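The abstract describes modeling dynamic mouth contours with a spatio-temporal graph convolution. The paper's ASST-GCN learns the adjacency adaptively; as a rough illustration of the general spatio-temporal graph convolution idea only (not the authors' implementation), the following NumPy sketch applies a fixed, symmetrically normalized adjacency over contour landmarks per frame, followed by a simple temporal averaging window. All names and shapes here are illustrative assumptions.

```python
import numpy as np

def normalize_adjacency(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def st_graph_conv(X, A, W, t_kernel=3):
    """One spatio-temporal graph conv step (illustrative sketch).

    X : (T, N, C_in)  - T frames, N contour landmarks, C_in features each
    A : (N, N)        - fixed landmark adjacency (ASST-GCN instead learns this)
    W : (C_in, C_out) - feature transform weights
    """
    # Spatial step: aggregate neighboring landmarks within each frame
    A_norm = normalize_adjacency(A)
    spatial = np.einsum('ij,tjc,cd->tid', A_norm, X, W)  # (T, N, C_out)
    # Temporal step: average over a sliding window of t_kernel frames
    pad = t_kernel // 2
    padded = np.pad(spatial, ((pad, pad), (0, 0), (0, 0)), mode='edge')
    out = np.stack([padded[t:t + t_kernel].mean(axis=0)
                    for t in range(X.shape[0])])
    return out

# Example: 5 frames of a 4-landmark chain-shaped mouth contour
T, N = 5, 4
A = np.zeros((N, N))
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0  # connect consecutive landmarks
X = np.random.randn(T, N, 2)         # e.g. (x, y) coordinates per landmark
W = np.random.randn(2, 3)
out = st_graph_conv(X, A, W)         # shape (5, 4, 3)
```

In the actual ASST-GCN, the adjacency would be a learnable, data-dependent matrix rather than a fixed chain, and the temporal averaging would be replaced by learned temporal convolutions; this sketch only shows how spatial aggregation over a landmark graph composes with a temporal operation.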

Collections
  • Open access [38841]