Joint Inpainting of Depth and Reflectance with Visibility Estimation
BEVILACQUA, Marco
Laboratoire de l'intégration, du matériau au système [IMS]
Institut de Mathématiques de Bordeaux [IMB]
Laboratoire Bordelais de Recherche en Informatique [LaBRI]
Méthodes d'Analyses pour le Traitement d'Images et la Stéréorestitution [MATIS]
Language
en
Working paper - Preprint
English abstract
This paper presents a novel strategy to generate, from 3-D lidar measures, dense depth and reflectance images coherent with given color images. It also estimates, for each pixel of the input images, a visibility attribute. 3-D lidar measures carry multiple pieces of information, e.g. relative distances to the sensor (from which we can compute depths) and reflectances. When projecting a lidar point cloud onto a reference image plane, we generally obtain sparse images, due to undersampling. Moreover, lidar and image sensor positions typically differ during acquisition; therefore points belonging to objects that are hidden from the image viewpoint might appear in the lidar images. The proposed algorithm estimates the complete depth and reflectance images, while concurrently excluding those hidden points. It consists of solving a joint (depth and reflectance) variational image inpainting problem, with an extra variable, estimated concurrently, that handles the selection of visible points. As regularizers, two coupled total variation terms are included to match, two by two, the depth, reflectance, and color image gradients. We compare our algorithm with other image-guided depth upsampling methods, and show that, when dealing with real data, it produces better inpainted images by solving the visibility issue.
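The sparsity issue described in the abstract can be illustrated with a minimal sketch of the projection step: lidar points in camera coordinates are mapped through pinhole intrinsics onto the image plane, and only a small fraction of pixels receive a depth value. The function name, the z-buffer tie-breaking, and the use of NaN to mark holes are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def project_to_sparse_depth(points, K, height, width):
    """Project 3-D lidar points (N x 3, camera coordinates) onto the
    image plane given intrinsics K, keeping the nearest point per pixel
    (z-buffering). Pixels with no projected point are NaN -- these are
    the holes a subsequent inpainting step must fill."""
    # Keep only points in front of the camera (z > 0).
    pts = points[points[:, 2] > 0]
    # Pinhole projection: u = (fx*x + cx*z)/z, v = (fy*y + cy*z)/z.
    uvw = (K @ pts.T).T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[inside], v[inside], pts[inside, 2]
    depth = np.full((height, width), np.nan)
    # Write farthest points first so the nearest depth wins per pixel.
    order = np.argsort(-z)
    depth[v[order], u[order]] = z[order]
    return depth
```

Note the z-buffer only resolves collisions within a pixel; points that are occluded in the color view but land on *different* pixels (because the lidar and camera positions differ) still survive, which is exactly the visibility problem the joint inpainting formulation addresses.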
English keywords
Inpainting
Total Variation
Depth Maps
Lidar
Reflectance
Point Cloud
Visibility
Origin
Imported from HAL