Automatic selection of latent variables in variational auto-encoders
Language
EN
Conference paper
This document was published in
2022 30th European Signal Processing Conference (EUSIPCO), 2022-08-29, Belgrade; published 2022-10-18, pp. 1407-1411
Abstract (English)
Variational auto-encoders (VAEs) are powerful generative neural networks based on latent variables. They aim to capture the distribution of a dataset by building an informative space composed of a reduced number of variables. However, the size of this latent space is both sensitive and difficult to adjust. Thus, most state-of-the-art architectures suffer either from disentanglement issues or, conversely, from posterior collapse. Both phenomena impair the interpretability of the latent variables. In this paper, we propose a variant of the VAE which is able to automatically determine the informative components of the latent space. It consists of augmenting the vanilla VAE with auxiliary variables and defining a hierarchical model which encourages only a subset of the latent variables to be used for the encoding. We refer to it as NGVAE. We compare its performance with other auto-encoder based architectures.
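The abstract's notion of "informative components" can be made concrete with a standard VAE diagnostic (this is not the paper's NGVAE mechanism, just the common closed-form check it builds on): the KL divergence between the approximate posterior N(mu, sigma^2) and the prior N(0, 1), computed per latent dimension. A dimension whose posterior has collapsed onto the prior contributes near-zero KL and is thus uninformative. A minimal sketch with toy posterior statistics, assuming a diagonal-Gaussian encoder:

```python
import numpy as np

def kl_per_dim(mu, log_var):
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ), computed per latent
    # dimension and averaged over the batch (standard VAE regularizer).
    return 0.5 * np.mean(mu**2 + np.exp(log_var) - 1.0 - log_var, axis=0)

# Toy encoder outputs for a batch of 4 samples and 3 latent dimensions:
# dims 0 and 1 carry information; dim 2 has collapsed to the prior N(0, 1)
# (mu = 0, log_var = 0, i.e. sigma = 1).
mu = np.array([[ 1.2, -0.8, 0.0],
               [-0.9,  1.1, 0.0],
               [ 0.7, -1.3, 0.0],
               [-1.1,  0.9, 0.0]])
log_var = np.array([[-1.0, -0.5, 0.0]] * 4)

kl = kl_per_dim(mu, log_var)
active = kl > 0.01  # simple threshold on each dimension's KL "usage"
# active -> [True, True, False]: dim 2 is flagged as uninformative.
```

Vanilla VAEs require picking such a threshold (and the latent size) by hand; the paper's hierarchical auxiliary-variable construction is aimed at making this selection automatic.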
Keywords
Architecture
Neural networks
Buildings
Europe
Signal processing
Encoding
Neural net architecture
Deep neural networks
Variational inference
Generative models
Unsupervised models
Research units