OuluREPO – Oulun yliopiston julkaisuarkisto / University of Oulu repository

Vitranspad : video transformer using convolution and self-attention for face presentation attack detection

Ming, Zuheng; Yu, Zitong; Al-Ghadi, Musab; Visani, Muriel; Luqman, Muhammad Muzzamil; Burie, Jean-Christophe (2022-10-18)

 
Open file
nbnfi-fe2023041135862.pdf (610.6 KB)
nbnfi-fe2023041135862_meta.xml (39.13 KB)
nbnfi-fe2023041135862_solr.xml (37.48 KB)
Downloads:

URL:
https://doi.org/10.1109/ICIP46576.2022.9897560

Publisher: Institute of Electrical and Electronics Engineers
Published: 18.10.2022

Z. Ming, Z. Yu, M. Al-Ghadi, M. Visani, M. M. Luqman and J. -C. Burie, "Vitranspad: Video Transformer Using Convolution And Self-Attention For Face Presentation Attack Detection," 2022 IEEE International Conference on Image Processing (ICIP), Bordeaux, France, 2022, pp. 4248-4252, doi: 10.1109/ICIP46576.2022.9897560.

https://rightsstatements.org/vocab/InC/1.0/
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi-fe2023041135862
Abstract

Face Presentation Attack Detection (PAD) is an important measure to prevent spoofing attacks on face biometric systems. Many works based on Convolutional Neural Networks (CNNs) formulate face PAD as an image-level binary classification task without considering context. Alternatively, Vision Transformers (ViT), which use self-attention to attend to the context of an image, have become mainstream in face PAD. Inspired by ViT, we propose a video-based transformer for face PAD (ViTransPAD) with short/long-range spatio-temporal attention, which can not only focus on local details through short-range attention within a frame but also capture long-range dependencies across frames. Instead of using coarse single-scale image patches as in ViT, we propose a Multi-scale Multi-Head Self-Attention (MsMHSA) module that assigns multi-scale patch partitions of the Q, K, V feature maps to different heads of a single transformer in a coarse-to-fine manner, which enables learning a fine-grained representation for pixel-level discrimination in face PAD. Since pure transformers lack the inductive biases of convolutions, we also introduce convolutions into ViTransPAD to integrate the desirable properties of CNNs. Extensive experiments show the effectiveness of the proposed ViTransPAD, which offers a favourable accuracy-computation balance and can serve as a new backbone for face PAD.
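To make the MsMHSA idea concrete, below is a minimal, hypothetical PyTorch sketch of multi-scale multi-head self-attention over a 2D feature map: each head partitions its slice of the Q, K, V maps into non-overlapping patches at its own scale (coarse to fine) and attends over those patches. The module name, patch sizes and tensor shapes are illustrative assumptions, not the authors' implementation, and the sketch omits the convolutional stem and the cross-frame (long-range temporal) attention described in the abstract.

# Hypothetical sketch of multi-scale multi-head self-attention (MsMHSA) on a 2D
# feature map. Each head gets its own patch scale: coarse patches for global
# context, fine patches for local detail. Names, patch sizes and shapes are
# assumptions for illustration, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MsMHSA(nn.Module):
    def __init__(self, dim=96, patch_sizes=(8, 4, 2)):
        super().__init__()
        self.patch_sizes = patch_sizes               # one patch scale per head (assumption)
        self.num_heads = len(patch_sizes)
        assert dim % self.num_heads == 0
        self.head_dim = dim // self.num_heads
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1)   # joint Q, K, V projection
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)      # output projection

    def forward(self, x):                            # x: (B, C, H, W); H, W divisible by all patch sizes
        B, C, H, W = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=1)
        head_outputs = []
        for h, p in enumerate(self.patch_sizes):
            ch = slice(h * self.head_dim, (h + 1) * self.head_dim)

            def to_tokens(t):
                # Cut this head's channels into p x p patches and flatten each
                # patch into one token: (B, head_dim*p*p, N) -> (B, N, head_dim*p*p)
                return F.unfold(t[:, ch], kernel_size=p, stride=p).transpose(1, 2)

            qh, kh, vh = to_tokens(q), to_tokens(k), to_tokens(v)
            attn = (qh @ kh.transpose(-2, -1)) / (qh.shape[-1] ** 0.5)
            out = attn.softmax(dim=-1) @ vh                  # (B, N, head_dim*p*p)
            # Fold the attended tokens back into a (B, head_dim, H, W) map.
            out = F.fold(out.transpose(1, 2), (H, W), kernel_size=p, stride=p)
            head_outputs.append(out)
        return self.proj(torch.cat(head_outputs, dim=1))

# Example: a batch of 2 frames with 96-channel, 32x32 feature maps.
msa = MsMHSA(dim=96, patch_sizes=(8, 4, 2))
y = msa(torch.randn(2, 96, 32, 32))                  # -> torch.Size([2, 96, 32, 32])

In this sketch the coarse head (patch size 8) attends over a few large tokens while the fine head (patch size 2) attends over many small ones, mirroring the coarse-to-fine, pixel-level discrimination the abstract describes; the convolutional components and cross-frame attention of ViTransPAD would be layered on top of such a module.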

Collections
  • Avoin saatavuus (Open access) [37837]