Authors: Nazir, Saqib; Vaquero Otal, Lorenzo; Mucientes Molina, Manuel; Brea Sánchez, Víctor Manuel; Coltuc, Daniela
Date accessioned: 2025-11-10
Date available: 2025-11-10
Date issued: 2022-10-18
Citation: S. Nazir, L. Vaquero, M. Mucientes, V. M. Brea and D. Coltuc, "2HDED:Net for Joint Depth Estimation and Image Deblurring from a Single Out-of-Focus Image," 2022 IEEE International Conference on Image Processing (ICIP), Bordeaux, France, 2022, pp. 2006-2010, doi: 10.1109/ICIP46576.2022.9897352
Handle: https://hdl.handle.net/10347/43658
Rights: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Abstract: Depth estimation and all-in-focus image restoration from defocused RGB images are related problems, although most existing methods address them separately. The few approaches that solve both use pipeline processing to derive a depth or defocus map as an intermediate product that supports image deblurring, which remains the primary goal. In this paper, we propose a new Deep Neural Network (DNN) architecture that performs the tasks of depth estimation and image deblurring in parallel, giving them equal importance. Our Two-Headed Depth Estimation and Deblurring Network (2HDED:NET) is an encoder-decoder network for Depth from Defocus (DFD) extended with a deblurring branch that shares the same encoder. The network is tested on the NYU-Depth V2 dataset and compared with several state-of-the-art methods for depth estimation and image deblurring.

Language: English
Keywords: Depth from Defocus; Image Deblurring; Deep Learning
Title: 2HDED:NET for joint depth estimation and image deblurring from a single out-of-focus image
Type: conference paper
DOI: 10.1109/ICIP46576.2022.9897352
ISSN: 2381-8549
Access: open access
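The abstract describes a single shared encoder feeding two parallel decoder heads, one producing a depth map and one an all-in-focus image. The following is a minimal PyTorch sketch of that two-headed encoder-decoder layout only; all layer sizes, channel counts, and class/attribute names are illustrative assumptions and do not reproduce the paper's actual architecture.

```python
import torch
import torch.nn as nn


class TwoHeadedDFDNet(nn.Module):
    """Toy sketch in the spirit of 2HDED:NET: one shared encoder,
    two parallel decoders (depth estimation and deblurring).
    All sizes and names are hypothetical, not the paper's."""

    def __init__(self):
        super().__init__()
        # Shared encoder: downsamples the defocused RGB input 4x.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Depth head: decodes shared features to a 1-channel depth map.
        self.depth_head = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )
        # Deblurring head: decodes the same features to a 3-channel
        # all-in-focus image.
        self.deblur_head = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        feats = self.encoder(x)  # representation shared by both tasks
        return self.depth_head(feats), self.deblur_head(feats)


net = TwoHeadedDFDNet()
depth, aif = net(torch.randn(1, 3, 64, 64))
print(depth.shape, aif.shape)  # both heads decode back to 64x64
```

Because both heads backpropagate into the same encoder, training them jointly forces the shared features to serve depth estimation and deblurring with equal importance, which is the design point the abstract emphasizes over pipeline approaches.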