OuluREPO – Oulun yliopiston julkaisuarkisto / University of Oulu repository
Learnable Eulerian Dynamics for Micro-Expression Action Unit Detection

Varanka, Tuomas; Peng, Wei; Zhao, Guoying (2023-04-27)

 
Open file: nbnfioulu-202404182837.pdf (3.528 MB)
URL:
https://doi.org/10.1007/978-3-031-31438-4_26


Varanka, T., Peng, W., Zhao, G. (2023). Learnable Eulerian Dynamics for Micro-Expression Action Unit Detection. In: Gade, R., Felsberg, M., Kämäräinen, JK. (eds) Image Analysis. SCIA 2023. Lecture Notes in Computer Science, vol 13886. Springer, Cham. https://doi.org/10.1007/978-3-031-31438-4_26

https://rightsstatements.org/vocab/InC/1.0/
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Permanent address of the publication:
https://urn.fi/URN:NBN:fi:oulu-202404182837
Abstract

Micro-expressions (MEs) are subtle, quick, and involuntary facial muscle movements. Action unit (AU) detection plays an important role in facial micro-expression analysis due to the ambiguity of MEs. Unlike in typical AU detection performed on macro-expressions, the facial muscle movements in MEs are significantly more subtle. This makes AU detection in MEs a difficult challenge that only a limited number of previous studies have addressed. A common way to analyze subtle facial movements is to utilize the temporal changes across a sequence of frames, as subtle changes between static images are difficult to observe. Feature representations based on motion magnification and optical flow, for example, can effectively extract motion information from the temporal domain. However, they depend on manually chosen parameters and are computationally expensive.

To address these issues, we propose Learnable Eulerian Dynamics (LED), capable of extracting motion representations efficiently. Rather than magnifying the motion like Eulerian video magnification, LED only extracts it. The parameters of the motion extraction are made learnable by applying automatic differentiation to a linearized version of Eulerian video magnification. The extracted motion features are then further refined by convolutional layers. This enables the method to fine-tune the features through end-to-end training, leading to task-specific features that enhance performance on the downstream task. (Code is publicly available at https://github.com/tvaranka/led.)
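The core idea above — extracting per-pixel motion with a temporal filter whose coefficients could be learned, rather than hand-tuned as in Eulerian video magnification — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see the linked repository for that); the function name, the fixed difference filter, and the toy input are illustrative assumptions, and in the actual method the filter parameters would be optimized end-to-end by automatic differentiation and the output refined by convolutional layers.

```python
import numpy as np

def eulerian_motion(frames, taps):
    """Extract a per-pixel temporal motion response via an FIR filter.

    frames: (T, H, W) array of grayscale frames.
    taps:   (K,) temporal filter coefficients; fixed here for illustration,
            but learnable in an autodiff framework as the abstract describes.
    Returns a (T-K+1, H, W) motion response (linearized Eulerian idea:
    temporal filtering of pixel intensities, without adding it back to
    magnify the input).
    """
    T, H, W = frames.shape
    K = len(taps)
    out = np.zeros((T - K + 1, H, W))
    for t in range(T - K + 1):
        # temporal convolution at each pixel location
        out[t] = np.tensordot(taps, frames[t:t + K], axes=(0, 0))
    return out

# Toy example: a bright dot drifting one pixel per frame.
frames = np.zeros((5, 8, 8))
for t in range(5):
    frames[t, 4, 2 + t] = 1.0

# A simple difference filter [-1, 1] responds only where intensity changes,
# so static regions yield zero and the moving dot yields a signed response.
motion = eulerian_motion(frames, np.array([-1.0, 1.0]))
```

In a trainable version, `taps` would be a parameter tensor in a framework such as PyTorch, so gradients from the downstream AU-detection loss could reshape the temporal filter into a task-specific motion extractor.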