Monocular depth estimation primed by salient point detection and normalized Hessian loss
Huynh, Lam; Pedone, Matteo; Nguyen, Phong; Matas, Jiri; Rahtu, Esa; Heikkilä, Janne (2022-01-06)
L. Huynh, M. Pedone, P. Nguyen, J. Matas, E. Rahtu and J. Heikkilä, "Monocular Depth Estimation Primed by Salient Point Detection and Normalized Hessian Loss," 2021 International Conference on 3D Vision (3DV), 2021, pp. 228-238, doi: 10.1109/3DV53792.2021.00033
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
https://rightsstatements.org/vocab/InC/1.0/
https://urn.fi/URN:NBN:fi-fe2022022320592
Abstract
Deep neural networks have recently achieved strong results on single-image depth estimation. However, current work on this topic reveals a clear trade-off between accuracy and network size. This work proposes an accurate and lightweight framework for monocular depth estimation based on a self-attention mechanism stemming from salient point detection. Specifically, we utilize a sparse set of keypoints to train a FuSaNet model that consists of two major components: Fusion-Net and Saliency-Net. In addition, we introduce a normalized Hessian loss term that is invariant to scaling and shear along the depth direction, which is shown to substantially improve accuracy. The proposed method achieves state-of-the-art results on NYU-Depth-v2 and KITTI while using a model 3.1–38.4 times smaller, in terms of the number of parameters, than baseline approaches. Experiments on the SUN-RGBD dataset further demonstrate the generalizability of the proposed method.
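The abstract does not give the exact form of the normalized Hessian term, but the stated invariances suggest one plausible construction: the Hessian of a depth map is unchanged by a shear z → z + ax + by (a linear term has zero second derivatives), and per-pixel normalization of the Hessian components removes a global depth scaling z → sz, since all components scale by the same factor. The following minimal PyTorch sketch illustrates that idea; the function names and the per-pixel L2 normalization are illustrative assumptions, not the authors' formulation.

```python
# Hypothetical sketch of a scale- and shear-invariant Hessian-based depth loss.
# NOT the paper's exact formulation; the names and the per-pixel L2
# normalization are illustrative assumptions.
import torch
import torch.nn.functional as F

def hessian_components(depth: torch.Tensor) -> torch.Tensor:
    """Finite-difference second derivatives (z_xx, z_yy, z_xy) of a (B,1,H,W)
    depth map. A shear z -> z + a*x + b*y only alters first derivatives, so
    the Hessian is invariant to shear along the depth direction by construction."""
    kxx = torch.tensor([[0., 0., 0.], [1., -2., 1.], [0., 0., 0.]]).view(1, 1, 3, 3)
    kyy = kxx.transpose(2, 3)
    kxy = 0.25 * torch.tensor([[1., 0., -1.], [0., 0., 0.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    kernels = torch.cat([kxx, kyy, kxy], dim=0).to(depth.dtype).to(depth.device)
    return F.conv2d(depth, kernels, padding=1)  # (B, 3, H, W)

def normalized_hessian_loss(pred: torch.Tensor, gt: torch.Tensor,
                            eps: float = 1e-6) -> torch.Tensor:
    """Per-pixel L2 normalization of the Hessian vector cancels any global
    depth scaling z -> s*z, since all three components scale by the same s."""
    h_pred = F.normalize(hessian_components(pred), dim=1, eps=eps)
    h_gt = F.normalize(hessian_components(gt), dim=1, eps=eps)
    return (h_pred - h_gt).abs().mean()
```

In practice such a term would be added, with some weighting, to a standard pixel-wise depth loss; the weighting and the paper's actual normalization may differ from this sketch.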
Collections
- Open access [37337]