Guiding monocular depth estimation using depth-attention volume
Huynh, Lam; Nguyen-Ha, Phong; Matas, Jiri; Rahtu, Esa; Heikkilä, Janne (2020-11-13)
Huynh L., Nguyen-Ha P., Matas J., Rahtu E., Heikkilä J. (2020) Guiding Monocular Depth Estimation Using Depth-Attention Volume. In: Vedaldi A., Bischof H., Brox T., Frahm JM. (eds) Computer Vision – ECCV 2020. ECCV 2020. Lecture Notes in Computer Science, vol 12371. Springer, Cham. https://doi.org/10.1007/978-3-030-58574-7_35
© Springer Nature Switzerland AG 2020. This is a post-peer-review, pre-copyedit version of an article published in Computer Vision – ECCV 2020 - 16th European Conference, 2020, Proceedings. The final authenticated version is available online at: https://doi.org/10.1007/978-3-030-58574-7_35.
https://rightsstatements.org/vocab/InC/1.0/
https://urn.fi/URN:NBN:fi-fe202101071184
Abstract
Recovering scene depth from a single image is an ill-posed problem that requires additional priors, often referred to as monocular depth cues, to disambiguate different 3D interpretations. In recent works, these priors have been learned in an end-to-end manner from large datasets using deep neural networks. In this paper, we propose guiding depth estimation to favor planar structures, which are ubiquitous, especially in indoor environments. This is achieved by incorporating a non-local coplanarity constraint into the network with a novel attention mechanism called the depth-attention volume (DAV). Experiments on two popular indoor datasets, namely NYU-Depth-v2 and ScanNet, show that our method achieves state-of-the-art depth estimation results while using only a fraction of the number of parameters needed by competing methods. Code is available at: https://github.com/HuynhLam/DAV.
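To give an intuition for the idea, a minimal sketch of a non-local attention volume is shown below. This is an illustrative toy in NumPy, not the paper's actual architecture: the function names, the dot-product similarity, and the use of the attention volume to smooth a coarse depth map are all assumptions made for the example; the paper's DAV is learned end-to-end inside a deep network.

```python
import numpy as np

def depth_attention_volume(features):
    """Toy non-local attention: a softmax-normalized matrix of pairwise
    similarities between all pixel embeddings (a simplified stand-in
    for the paper's learned depth-attention volume)."""
    h, w, c = features.shape
    x = features.reshape(h * w, c)           # flatten spatial dimensions
    logits = x @ x.T / np.sqrt(c)            # scaled dot-product similarity
    # Softmax over the second axis: each pixel attends to all others.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)  # shape (h*w, h*w)

def aggregate_depth(dav, coarse_depth):
    """Refine a coarse depth map by non-local weighted averaging, so that
    pixels with similar features (e.g. on the same plane) share depth
    evidence -- a crude analogue of the coplanarity guidance."""
    h, w = coarse_depth.shape
    refined = dav @ coarse_depth.reshape(-1)
    return refined.reshape(h, w)

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 4, 8)).astype(np.float32)  # hypothetical features
depth = rng.uniform(1.0, 5.0, size=(4, 4))                # hypothetical coarse depth
dav = depth_attention_volume(feat)
out = aggregate_depth(dav, depth)
print(out.shape)  # (4, 4)
```

Because each row of the attention volume is a convex combination, the refined depth stays within the range of the coarse input; in the paper the attention weights are instead predicted by the network and supervised to encode coplanarity.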