OuluREPO – University of Oulu repository

Rehearsal-Free Domain Continual Face Anti-Spoofing: Generalize More and Forget Less

Cai, Rizhao; Cui, Yawen; Li, Zhi; Yu, Zitong; Li, Haoliang; Hu, Yongjian; Kot, Alex (2024-01-15)

 
Open file
nbnfioulu-202410306514.pdf (7.858 MB)

URL:
https://doi.org/10.1109/ICCV51070.2023.00738


R. Cai et al., "Rehearsal-Free Domain Continual Face Anti-Spoofing: Generalize More and Forget Less," 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2023, pp. 8003-8014, doi: 10.1109/ICCV51070.2023.00738.

https://rightsstatements.org/vocab/InC/1.0/
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The persistent address of this publication is
https://urn.fi/URN:NBN:fi:oulu-202410306514
Abstract

Face Anti-Spoofing (FAS) has recently been studied under the continual learning setting, where FAS models are expected to evolve after encountering data from new domains. However, existing methods need extra replay buffers to store previous data for rehearsal, which becomes infeasible when previous data is unavailable because of privacy issues. In this paper, we propose the first rehearsal-free method for Domain Continual Learning (DCL) of FAS, which deals with catastrophic forgetting and unseen-domain generalization simultaneously. For better generalization to unseen domains, we design the Dynamic Central Difference Convolutional Adapter (DCDCA) to adapt Vision Transformer (ViT) models during the continual learning sessions. To alleviate the forgetting of previous domains without using previous data, we propose Proxy Prototype Contrastive Regularization (PPCR) to constrain the continual learning with previous-domain knowledge carried by the proxy prototypes. Simulating practical DCL scenarios, we devise two new protocols that evaluate both generalization and anti-forgetting performance. Extensive experimental results show that our proposed method can improve the generalization performance in unseen domains and alleviate the catastrophic forgetting of previous knowledge. The code and protocol files are released at https://github.com/RizhaoCai/DCL-FAS-ICCV2023.
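
To make the adapter idea in the abstract concrete, below is a minimal PyTorch-style sketch of a central-difference-convolution adapter for ViT patch tokens. It is an illustration under stated assumptions, not the released DCDCA code: the class name CDCAdapter, the bottleneck width, and the fixed blending factor theta are hypothetical (the paper's adapter makes this blending dynamic), and the actual implementation is available at the GitHub link above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CDCAdapter(nn.Module):
    # Bottleneck adapter whose spatial mixing uses a central-difference
    # convolution (CDC) term, blended with a vanilla convolution by `theta`.
    # Names and hyperparameters here are illustrative assumptions.
    def __init__(self, dim, hidden=64, theta=0.7):
        super().__init__()
        self.down = nn.Linear(dim, hidden)   # down-projection of token features
        self.conv = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1)
        self.up = nn.Linear(hidden, dim)     # up-projection back to ViT width
        self.theta = theta                   # 0.0 -> vanilla conv, 1.0 -> pure CDC

    def forward(self, tokens):
        # tokens: (B, N, dim) patch tokens; the class token is left out for simplicity.
        b, n, d = tokens.shape
        h = w = int(n ** 0.5)                # assumes a square patch grid
        x = self.down(tokens).transpose(1, 2).reshape(b, -1, h, w)

        out_vanilla = self.conv(x)
        # Central-difference term: convolving (x(p) - x(p0)) over the kernel window
        # equals subtracting a 1x1 convolution whose kernel is the spatial sum of
        # the 3x3 weights, so the CDC term adds no extra parameters.
        kernel_diff = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        out_diff = F.conv2d(x, kernel_diff, bias=None, stride=1, padding=0)
        out = out_vanilla - self.theta * out_diff

        out = out.flatten(2).transpose(1, 2)  # back to (B, N, hidden)
        return tokens + self.up(out)          # residual connection around the adapter

In the rehearsal-free setting described above, such an adapter would be the part updated in each new learning session while the ViT backbone stays frozen; a PPCR-style regularizer would then pull features of new-domain data toward class proxies retained from earlier sessions instead of replaying stored samples. That loss is not sketched here.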
Collections
  • Open access [38865]