Show simple item record

hal.structure.identifier: National Key Laboratory of Fundamental Science on Synthetic Vision [Chengdu]
dc.contributor.author: XING, Guanyu
hal.structure.identifier: College of Computer Science [Chengdu]
dc.contributor.author: LIU, Yanli
hal.structure.identifier: Temple University [Philadelphia]
dc.contributor.author: LING, Haibin
hal.structure.identifier: Laboratoire Photonique, Numérique et Nanosciences [LP2N]
hal.structure.identifier: Archeovision CNRS
hal.structure.identifier: Melting the frontiers between Light, Shape and Matter [MANAO]
dc.contributor.author: GRANIER, Xavier
hal.structure.identifier: College of Computer Science [Chengdu]
dc.contributor.author: ZHANG, Yanci
dc.date.accessioned: 2023-05-12T10:48:45Z
dc.date.available: 2023-05-12T10:48:45Z
dc.date.issued: 2020-04-01
dc.identifier.issn: 1077-2626
dc.identifier.uri: https://oskar-bordeaux.fr/handle/20.500.12278/181779
dc.description.abstractEn: We propose an automatic framework to recover the illumination of indoor scenes based on a single RGB-D image. Unlike previous works, our method can recover spatially varying illumination without using any lighting capturing devices or HDR information. The recovered illumination can produce realistic rendering results. To model the geometry of the visible and invisible parts of scenes corresponding to the input RGB-D image, we assume that all objects shown in the image are located in a box with six faces and build a geometry model based on the depth map. We then present a confidence-scoring based strategy to separate the light sources from the highlight areas. The positions of light sources both in and out of the camera's view are calculated based on the classification result and the recovered geometry model. Finally, an iterative procedure is proposed to calculate the colors of light sources and the materials in the scene. In addition, a data-driven method is used to set constraints on the light source intensities. Using the estimated light sources and geometry model, environment maps at different points in the scene are generated that can model the spatial variance of illumination. The experimental results demonstrate the validity of our approach.
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers
dc.subject.en: Dynamic range
dc.subject.en: Three-dimensional displays
dc.subject.en: Cameras
dc.subject.en: Illumination recovery
dc.subject.en: Light sources
dc.subject.en: Probes
dc.subject.en: Indoor scenes
dc.subject.en: Single RGB-D image
dc.subject.en: Automatic
dc.subject.en: Geometry
dc.subject.en: Lighting
dc.title.en: Automatic Spatially Varying Illumination Recovery of Indoor Scenes Based on a Single RGB-D Image
dc.type: Journal article
dc.identifier.doi: 10.1109/TVCG.2018.2876541
dc.subject.hal: Computer Science [cs]/Image synthesis and virtual reality [cs.GR]
bordeaux.journal: IEEE Transactions on Visualization and Computer Graphics
bordeaux.page: 1672 - 1685
bordeaux.volume: 26
bordeaux.hal.laboratories: Laboratoire Photonique, Numérique et Nanosciences (LP2N) - UMR 5298*
bordeaux.issue: 4
bordeaux.institution: Université de Bordeaux
bordeaux.institution: CNRS
bordeaux.peerReviewed: yes
hal.identifier: hal-01907554
hal.version: 1
hal.origin.link: https://hal.archives-ouvertes.fr//hal-01907554v1
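
The three hal.* fields above tie this record to its HAL deposit, which can also be retrieved programmatically. A minimal sketch using only the standard library, assuming the public HAL search API at api.archives-ouvertes.fr; the requested Solr field names (title_s, authFullName_s, doiId_s) are assumptions about the API schema, not part of this record:

    import json
    import urllib.parse
    import urllib.request

    # Look the record up by its HAL identifier (hal.identifier above).
    # NOTE: the fl= field names are assumed from the HAL API's usual
    # Solr schema; adjust them if the schema differs.
    params = urllib.parse.urlencode({
        "q": "halId_s:hal-01907554",
        "fl": "title_s,authFullName_s,doiId_s",
        "wt": "json",
    })
    url = "https://api.archives-ouvertes.fr/search/?" + params
    with urllib.request.urlopen(url) as resp:
        docs = json.load(resp)["response"]["docs"]
    for doc in docs:
        print(doc)
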
bordeaux.COinS: ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.jtitle=IEEE%20Transactions%20on%20Visualization%20and%20Computer%20Graphics&rft.date=2020-04-01&rft.volume=26&rft.issue=4&rft.spage=1672%20-%201685&rft.epage=1672%20-%201685&rft.eissn=1077-2626&rft.issn=1077-2626&rft.au=XING,%20Guanyu&LIU,%20Yanli&LING,%20Haibin&GRANIER,%20Xavier&ZHANG,%20Yanci&rft.genre=article
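
The bordeaux.COinS value is an OpenURL ContextObject (Z39.88-2004) in key-encoded-value form, so it can be unpacked with the standard library alone. A minimal sketch; note that this particular record joins the authors with bare '&' instead of repeating rft.au=, so keep_blank_values=True is needed to recover all names:

    from urllib.parse import parse_qsl

    # Full value of the bordeaux.COinS field above.
    coins = (
        "ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal"
        "&rft.jtitle=IEEE%20Transactions%20on%20Visualization%20and%20Computer%20Graphics"
        "&rft.date=2020-04-01&rft.volume=26&rft.issue=4"
        "&rft.spage=1672%20-%201685&rft.epage=1672%20-%201685"
        "&rft.eissn=1077-2626&rft.issn=1077-2626"
        "&rft.au=XING,%20Guanyu&LIU,%20Yanli&LING,%20Haibin"
        "&GRANIER,%20Xavier&ZHANG,%20Yanci&rft.genre=article"
    )

    # keep_blank_values=True keeps the bare author segments: all but the
    # first author arrive as keys with empty values.
    for key, value in parse_qsl(coins, keep_blank_values=True):
        print(f"{key!r} -> {value!r}")

As the output shows, rft.spage and rft.epage both carry the full page range, mirroring the bordeaux.page field above.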


Files in this item


There are no files associated with this item.

This item appears in the following collection(s)
