dc.rights.license: open (en_US)
hal.structure.identifier: Laboratoire de l'intégration, du matériau au système [IMS]
dc.contributor.author: JOUFFROY, Emma
hal.structure.identifier: Laboratoire de l'intégration, du matériau au système [IMS]
dc.contributor.author: GIREMUS, Audrey (IDREF: 163238766)
hal.structure.identifier: Laboratoire de l'intégration, du matériau au système [IMS]
dc.contributor.author: BERTHOUMIEU, Yannick
hal.structure.identifier: Laboratoire de l'intégration, du matériau au système [IMS]
dc.contributor.author: BACH, Olivier
hal.structure.identifier: Laboratoire de l'intégration, du matériau au système [IMS]
dc.contributor.author: HUGGET, Alain
dc.date.accessioned: 2023-11-14T10:36:30Z
dc.date.available: 2023-11-14T10:36:30Z
dc.date.issued: 2022-10-18
dc.date.conference: 2022-08-29
dc.identifier.issn: 2473-2001 (en_US)
dc.identifier.uri: https://oskar-bordeaux.fr/handle/20.500.12278/184753
dc.description.abstractEn: Variational auto-encoders (VAEs) are powerful generative neural networks based on latent variables. They aim to capture the distribution of a dataset by building an informative space composed of a reduced number of variables. However, the size of this latent space is both sensitive and difficult to adjust, so most state-of-the-art architectures suffer either from disentanglement issues or, conversely, from posterior collapse. Both phenomena impair the interpretability of the latent variables. In this paper, we propose a variant of the VAE that automatically determines the informative components of the latent space. It augments the vanilla VAE with auxiliary variables and defines a hierarchical model that encourages only a subset of the latent variables to be used for the encoding. We refer to it as NGVAE and compare its performance with other auto-encoder based architectures. (An illustrative sketch follows the record below.)
dc.language.iso: EN (en_US)
dc.subject: Architecture
dc.subject: Neural networks
dc.subject: Buildings
dc.subject: Europe
dc.subject: Signal processing
dc.subject: Encoding
dc.subject: Neural net architecture
dc.subject: Deep neural networks
dc.subject: Variational inference
dc.subject: Generative models
dc.subject: Unsupervised models
dc.title.en: Automatic selection of latent variables in variational auto-encoders
dc.type: Conference paper (en_US)
dc.identifier.doi: 10.23919/EUSIPCO55093.2022.9909746 (en_US)
dc.subject.hal: Engineering sciences [physics] (en_US)
bordeaux.page: 1407-1411 (en_US)
bordeaux.hal.laboratories: IMS : Laboratoire de l'Intégration du Matériau au Système - UMR 5218 (en_US)
bordeaux.institution: Université de Bordeaux (en_US)
bordeaux.institution: Bordeaux INP (en_US)
bordeaux.institution: CNRS (en_US)
bordeaux.conference.title: 2022 30th European Signal Processing Conference (EUSIPCO) (en_US)
bordeaux.country: rs (en_US)
bordeaux.title.proceeding: 2022 30th European Signal Processing Conference (EUSIPCO) (en_US)
bordeaux.team: SIGNAL AND IMAGE PROCESSING (en_US)
bordeaux.conference.city: Belgrade (en_US)
hal.identifier: hal-04284248
hal.version: 1
hal.date.transferred: 2023-11-14T10:36:32Z
hal.invited: yes (en_US)
hal.proceedings: yes (en_US)
hal.conference.end: 2022-09-02
hal.popular: no (en_US)
hal.audience: International (en_US)
hal.export: true
dc.rights.cc: No CC license (en_US)
bordeaux.COinS: ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.date=2022-10-18&rft.spage=1407-1411&rft.epage=1407-1411&rft.eissn=2473-2001&rft.issn=2473-2001&rft.au=JOUFFROY,%20Emma&GIREMUS,%20Audrey&BERTHOUMIEU,%20Yannick&BACH,%20Olivier&HUGGET,%20Alain&rft.genre=unknown
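
The abstract only sketches NGVAE at a high level; the auxiliary-variable construction and hierarchical prior are detailed in the paper itself (see the DOI above). For orientation, here is a minimal, hypothetical sketch in Python (PyTorch) of a vanilla VAE whose per-dimension KL terms can be inspected to see which latent components actually carry information, which is the unused-dimension / posterior-collapse issue the paper addresses. The class and function names, layer sizes, and the 0.1-nat threshold are illustrative assumptions, not taken from the paper, and this is not the NGVAE model.

# Hedged sketch (not the paper's NGVAE): a vanilla VAE whose per-dimension
# KL terms serve as a diagnostic for informative vs. collapsed latent components.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VanillaVAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=20):  # sizes are assumptions
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)       # posterior mean
        self.logvar = nn.Linear(h_dim, z_dim)   # posterior log-variance
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return self.dec(z), mu, logvar

def elbo_loss(x, x_hat, mu, logvar):
    # Reconstruction term (Bernoulli likelihood) plus KL(q(z|x) || N(0, I)),
    # kept per latent dimension so unused components can be spotted.
    rec = F.binary_cross_entropy_with_logits(x_hat, x, reduction="sum")
    kl_per_dim = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=0)
    return rec + kl_per_dim.sum(), kl_per_dim

# Usage: dimensions whose average KL stays near zero are effectively unused.
model = VanillaVAE()
x = torch.rand(32, 784)
x_hat, mu, logvar = model(x)
loss, kl_per_dim = elbo_loss(x, x_hat, mu, logvar)
informative = (kl_per_dim / x.size(0)) > 0.1   # illustrative threshold
print(loss.item(), informative.sum().item(), "informative dimensions")

In this vanilla setup the number of informative dimensions has to be read off after training and tuned by hand; the NGVAE described in the abstract instead selects the informative components automatically via auxiliary variables and a hierarchical model.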

