2HDED:NET for joint depth estimation and image deblurring from a single out-of-focus image
Publisher
IEEE
Abstract
Depth estimation and all-in-focus image restoration from defocused RGB images are related problems, although most existing methods address them separately. The few approaches that solve both use a pipeline that derives a depth or defocus map as an intermediate product supporting image deblurring, which remains the primary goal. In this paper, we propose a new Deep Neural Network (DNN) architecture that performs depth estimation and image deblurring in parallel, giving both tasks equal importance. Our Two-Headed Depth Estimation and Deblurring Network (2HDED:NET) is an encoder-decoder network for Depth from Defocus (DFD) that is extended with a deblurring branch sharing the same encoder. The network is tested on the NYU-Depth V2 dataset and compared with several state-of-the-art methods for depth estimation and image deblurring.
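The shared-encoder, two-headed layout described above can be sketched as a toy NumPy example. This is only an illustration of the architectural idea (one encoder feeding two task-specific heads); the layer shapes, names, and the use of plain matrix multiplies are assumptions for demonstration, not the paper's actual convolutional design.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity standing in for the encoder's activations.
    return np.maximum(x, 0.0)

class TwoHeadedNet:
    """Toy sketch: one shared encoder, two task-specific heads."""

    def __init__(self, in_dim=16, feat_dim=8, depth_dim=4, rgb_dim=4):
        # Hypothetical weight shapes chosen only for illustration.
        self.W_enc = rng.standard_normal((in_dim, feat_dim))      # shared encoder
        self.W_depth = rng.standard_normal((feat_dim, depth_dim)) # depth head
        self.W_deblur = rng.standard_normal((feat_dim, rgb_dim))  # deblurring head

    def forward(self, x):
        feats = relu(x @ self.W_enc)   # shared representation for both tasks
        depth = feats @ self.W_depth   # depth-estimation branch
        sharp = feats @ self.W_deblur  # all-in-focus restoration branch
        return depth, sharp

net = TwoHeadedNet()
x = rng.standard_normal(16)           # stand-in for a defocused input image
depth, sharp = net.forward(x)
print(depth.shape, sharp.shape)       # -> (4,) (4,)
```

Because the encoder is shared, gradients from both task losses would update the same encoder weights during training, which is what couples the two problems in this design.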
Description
© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Bibliographic citation
S. Nazir, L. Vaquero, M. Mucientes, V. M. Brea and D. Coltuc, "2HDED:Net for Joint Depth Estimation and Image Deblurring from a Single Out-of-Focus Image," 2022 IEEE International Conference on Image Processing (ICIP), Bordeaux, France, 2022, pp. 2006-2010, doi: 10.1109/ICIP46576.2022.9897352.
Publisher version
https://doi.org/10.1109/ICIP46576.2022.9897352
Sponsors
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 860370. The last author acknowledges financial support from UEFISCDI Romania grant 31/01.01.2021 PN III, 3.6 Suport.