OuluREPO – University of Oulu repository

PFCFuse: A Poolformer and CNN Fusion Network for Infrared-Visible Image Fusion

Hu, Xinyu; Liu, Yang; Yang, Feng (2024-09-02)

 
Open file
nbnfioulu-202503101935.pdf (12.29 MB)

URL:
https://doi.org/10.1109/TIM.2024.3450061


X. Hu, Y. Liu and F. Yang, "PFCFuse: A Poolformer and CNN Fusion Network for Infrared-Visible Image Fusion," in IEEE Transactions on Instrumentation and Measurement, vol. 73, pp. 1-14, 2024, Art no. 5029714, doi: 10.1109/TIM.2024.3450061

https://rightsstatements.org/vocab/InC/1.0/
© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The persistent address of the publication is
https://urn.fi/URN:NBN:fi:oulu-202503101935
Abstract

Infrared-visible image fusion plays a central role in multimodal image fusion. By integrating feature information, we obtain more comprehensive and richer visual data that enhance image quality. However, current image fusion methods often rely on intricate networks to extract parameters from multimodal source images, making it difficult to fully exploit the valuable information needed for high-quality fusion results. In this research, we propose a Poolformer-convolutional neural network (CNN) dual-branch feature extraction fusion network for the fusion of infrared and visible images, termed PFCFuse. The network fully exploits and adaptively preserves the key features of the source images. First, we provide a dual-branch Poolformer-CNN feature extractor, using Poolformer blocks to extract low-frequency global information; basic spatial pooling operations substitute for the attention module of the transformer. Second, the model is trained with an adaptively adjusted α-Huber loss, which stably adjusts model parameters and reduces the influence of outliers on model predictions, enhancing the model's robustness while maintaining precision. Compared with state-of-the-art fusion models such as U2Fusion, RFNet, TarDAL, and CDDFuse, we obtain excellent results in both qualitative and quantitative experiments. Compared with CDDFuse, the most recent dual-branch feature extraction model, our model uses roughly half as many parameters. The code is available at https://github.com/HXY13/PFCFuse-Image-Fusion.
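As a rough illustration of the pooling-for-attention idea the abstract describes, below is a minimal PyTorch sketch of a Poolformer-style block and a hypothetical dual-branch extractor. The class names, block layout, and the concatenation fusion rule are assumptions made for illustration only; PFCFuse's actual architecture is in the linked GitHub repository.

    import torch
    import torch.nn as nn


    class PoolingTokenMixer(nn.Module):
        """Poolformer-style token mixer: average pooling stands in for self-attention."""

        def __init__(self, pool_size: int = 3):
            super().__init__()
            # Stride-1 pooling with padding keeps the spatial resolution unchanged.
            self.pool = nn.AvgPool2d(pool_size, stride=1, padding=pool_size // 2,
                                     count_include_pad=False)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Subtracting the input follows the original Poolformer formulation,
            # so the mixer contributes only the pooled residual.
            return self.pool(x) - x


    class PoolformerBlock(nn.Module):
        """Token mixing plus a pointwise-conv MLP, each behind a residual connection."""

        def __init__(self, dim: int, mlp_ratio: int = 4, pool_size: int = 3):
            super().__init__()
            self.norm1 = nn.GroupNorm(1, dim)  # per-sample normalization over channels
            self.mixer = PoolingTokenMixer(pool_size)
            self.norm2 = nn.GroupNorm(1, dim)
            self.mlp = nn.Sequential(
                nn.Conv2d(dim, dim * mlp_ratio, kernel_size=1),
                nn.GELU(),
                nn.Conv2d(dim * mlp_ratio, dim, kernel_size=1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = x + self.mixer(self.norm1(x))
            x = x + self.mlp(self.norm2(x))
            return x


    class DualBranchExtractor(nn.Module):
        """Hypothetical dual-branch extractor: a Poolformer branch for low-frequency
        global context alongside a plain CNN branch for local detail."""

        def __init__(self, dim: int):
            super().__init__()
            self.global_branch = PoolformerBlock(dim)
            self.local_branch = nn.Sequential(
                nn.Conv2d(dim, dim, kernel_size=3, padding=1),
                nn.GELU(),
                nn.Conv2d(dim, dim, kernel_size=3, padding=1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Concatenate global and local features along the channel axis.
            return torch.cat([self.global_branch(x), self.local_branch(x)], dim=1)

Note that the pooling mixer has no learnable weights, unlike the projection matrices of transformer attention; this is the general reason a Poolformer branch keeps the parameter count low, consistent with the abstract's comparison against CDDFuse.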
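The adaptively adjusted α-Huber loss the abstract names belongs to the Huber family, which is quadratic for small residuals and linear for large ones. The paper's adaptation rule is not reproduced on this page, so the sketch below uses a fixed, hypothetical delta threshold purely to show the robustness mechanism.

    import torch


    def alpha_huber_loss(pred: torch.Tensor, target: torch.Tensor,
                         delta: float = 1.0) -> torch.Tensor:
        """Huber-style loss: quadratic near zero, linear for large residuals.

        `delta` is a hypothetical fixed threshold standing in for the adaptive
        adjustment described in the paper.
        """
        residual = (pred - target).abs()
        quadratic = 0.5 * residual ** 2
        linear = delta * (residual - 0.5 * delta)
        # Outliers fall on the linear branch, which caps their gradient magnitude
        # at delta and thus limits their influence on the model parameters.
        return torch.where(residual <= delta, quadratic, linear).mean()

Because the gradient of the linear branch is bounded by delta, a few extreme residuals cannot dominate training, which is the robustness-with-precision trade-off the abstract argues for.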
Collections
  • Open access [38865]