
hal.structure.identifier: Laboratoire Bordelais de Recherche en Informatique [LaBRI]
dc.contributor.author: BIASUTTI, Pierre
hal.structure.identifier: Laboratoire Bordelais de Recherche en Informatique [LaBRI]
dc.contributor.author: LEPETIT, Vincent
hal.structure.identifier: Laboratoire des Sciences et Technologies de l'Information Géographique [LaSTIG]
dc.contributor.author: BRÉDIF, Mathieu
hal.structure.identifier: Institut de Mathématiques de Bordeaux [IMB]
dc.contributor.author: AUJOL, Jean-François
hal.structure.identifier: Laboratoire Bordelais de Recherche en Informatique [LaBRI]
dc.contributor.author: BUGEAU, Aurélie
dc.date.accessioned: 2024-04-04T02:59:25Z
dc.date.available: 2024-04-04T02:59:25Z
dc.date.conference: 2019-10-27
dc.identifier.uri: https://oskar-bordeaux.fr/handle/20.500.12278/192741
dc.description.abstractEn: We propose LU-Net (for LiDAR U-Net), a new method for the semantic segmentation of a 3D LiDAR point cloud. Instead of applying a global 3D segmentation method such as PointNet, we propose an end-to-end architecture for LiDAR point cloud semantic segmentation that efficiently solves the problem as an image processing problem. We first extract high-level 3D features for each point given its 3D neighbors. Then, these features are projected into a 2D multichannel range image by considering the topology of the sensor. Thanks to these learned features and this projection, we can finally perform the segmentation using a simple U-Net segmentation network, which performs very well while being very efficient. In this way, we can exploit both the 3D nature of the data and the specificity of the LiDAR sensor. As our experiments show, this approach outperforms the state of the art by a large margin on the KITTI dataset. Moreover, it operates at 24 fps on a single GPU, above the acquisition rate of common LiDAR sensors, which makes it suitable for real-time applications. (An illustrative sketch of the range-image projection step follows this record.)
dc.language.iso: en
dc.subject.en: point cloud
dc.subject.en: cnn
dc.subject.en: 3d
dc.subject.en: lidar
dc.subject.en: semantic segmentation
dc.subject.en: u-net
dc.title.en: LU-Net: An Efficient Network for 3D LiDAR Point Cloud Semantic Segmentation Based on End-to-End-Learned 3D Features and U-Net
dc.type: Conference paper (Communication dans un congrès)
dc.identifier.doi: 10.1109/ICCVW.2019.00123
dc.subject.hal: Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV]
bordeaux.hal.laboratories: Institut de Mathématiques de Bordeaux (IMB) - UMR 5251*
bordeaux.institution: Université de Bordeaux
bordeaux.institution: Bordeaux INP
bordeaux.institution: CNRS
bordeaux.conference.title: IEEE International Conference on Computer Vision Workshops (ICCVW)
bordeaux.country: KR
bordeaux.conference.city: Seoul
bordeaux.peerReviewed: yes
hal.identifier: hal-02269915
hal.version: 1
hal.invited: no
hal.proceedings: yes
hal.popular: no
hal.audience: International
hal.origin.link: https://hal.archives-ouvertes.fr//hal-02269915v1
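The abstract above describes projecting per-point features into a 2D multichannel range image that follows the sensor topology, then segmenting that image with a U-Net. Below is a minimal, illustrative Python/NumPy sketch of such a spherical projection, not the authors' implementation: the image size, vertical field of view, and the choice of raw channels (x, y, z, intensity, range) are assumptions roughly matching a 64-beam sensor like the one used for KITTI, whereas LU-Net feeds learned high-level per-point features into this grid.

```python
import numpy as np

def points_to_range_image(points, h=64, w=512, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 4) LiDAR point cloud (x, y, z, intensity) onto a 2D
    multichannel range image via a spherical projection.

    The resolution (64 x 512) and vertical field of view are illustrative
    values for a 64-beam sensor; they are assumptions, not taken from the paper.
    """
    x, y, z, intensity = points[:, 0], points[:, 1], points[:, 2], points[:, 3]
    depth = np.linalg.norm(points[:, :3], axis=1) + 1e-8

    yaw = np.arctan2(y, x)            # azimuth in [-pi, pi]
    pitch = np.arcsin(z / depth)      # elevation

    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down

    # Map angles to pixel coordinates: column from azimuth, row from elevation.
    u = 0.5 * (1.0 - yaw / np.pi) * w
    v = (1.0 - (pitch - fov_down) / fov) * h
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    # Channels: x, y, z, intensity, range. Write points far-to-near so that
    # when several points fall in the same pixel, the closest one remains.
    image = np.zeros((h, w, 5), dtype=np.float32)
    order = np.argsort(-depth)
    image[v[order], u[order]] = np.stack(
        [x[order], y[order], z[order], intensity[order], depth[order]], axis=1)
    return image
```

The resulting H x W x C tensor can then be fed to any 2D encoder-decoder segmentation network (a plain U-Net in the paper), and the predicted per-pixel labels mapped back to the original 3D points through the same (v, u) indices.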


