
dc.contributor.author: HURAULT, Samuel
dc.contributor.author: LECLAIRE, Arthur
hal.structure.identifier: Institut de Mathématiques de Bordeaux [IMB]
dc.contributor.author: PAPADAKIS, Nicolas
dc.date.accessioned: 2024-04-04T02:42:31Z
dc.date.available: 2024-04-04T02:42:31Z
dc.date.conference: 2022-07-17
dc.identifier.uri: https://oskar-bordeaux.fr/handle/20.500.12278/191250
dc.description.abstractEn: Plug-and-Play (PnP) methods solve ill-posed inverse problems through iterative proximal algorithms by replacing a proximal operator by a denoising operation. When applied with deep neural network denoisers, these methods have shown state-of-the-art visual performance for image restoration problems. However, their theoretical convergence analysis is still incomplete. Most of the existing convergence results consider nonexpansive denoisers, which is unrealistic, or limit their analysis to strongly convex data-fidelity terms in the inverse problem to solve. Recently, it was proposed to train the denoiser as a gradient descent step on a functional parameterized by a deep neural network. Using such a denoiser guarantees the convergence of the PnP version of the Half-Quadratic-Splitting (PnP-HQS) iterative algorithm. In this paper, we show that this gradient denoiser can actually correspond to the proximal operator of another scalar function. Given this new result, we exploit the convergence theory of proximal algorithms in the nonconvex setting to obtain convergence results for PnP-PGD (Proximal Gradient Descent) and PnP-ADMM (Alternating Direction Method of Multipliers). When built on top of a smooth gradient denoiser, we show that PnP-PGD and PnP-ADMM are convergent and target stationary points of an explicit functional. These convergence results are confirmed with numerical experiments on deblurring, super-resolution and inpainting.
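The PnP-PGD scheme described in the abstract alternates a gradient step on the data-fidelity term with a denoising step. The following is a minimal toy sketch, not the paper's method: the deep gradient-step denoiser D = Id - ∇g of the paper is replaced here by a hand-made quadratic g (an assumption for illustration), and `A`, `tau`, and `strength` are illustrative names.

```python
import numpy as np

def grad_f(x, A, y):
    # Gradient of the quadratic data-fidelity f(x) = 0.5 * ||A @ x - y||^2.
    return A.T @ (A @ x - y)

def gradient_step_denoiser(x, strength=0.1):
    # Toy gradient-step denoiser D(x) = x - grad g(x), with the stand-in
    # potential g(x) = (strength / 2) * ||x||^2, so D(x) = (1 - strength) * x.
    # In the paper, g is parameterized by a deep neural network.
    return x - strength * x

def pnp_pgd(y, A, tau=0.5, n_iter=200):
    # PnP Proximal Gradient Descent: gradient step on f, then denoise.
    x = A.T @ y  # crude initialization
    for _ in range(n_iter):
        x = gradient_step_denoiser(x - tau * grad_f(x, A, y))
    return x
```

With the paper's gradient denoiser, this iteration targets stationary points of an explicit functional (data-fidelity plus the regularizer whose proximal operator the denoiser realizes); the toy quadratic version above simply converges to a fixed point of the contraction.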
dc.description.sponsorship: Rethinking archive post-production with patch-based, variational and learning methods - ANR-19-CE23-0027
dc.language.iso: en
dc.title.en: Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization
dc.type: Conference paper
dc.subject.hal: Statistics [stat]/Machine Learning [stat.ML]
dc.subject.hal: Computer Science [cs]/Signal and Image Processing
dc.identifier.arxiv: 2201.13256
bordeaux.hal.laboratories: Institut de Mathématiques de Bordeaux (IMB) - UMR 5251
bordeaux.institution: Université de Bordeaux
bordeaux.institution: Bordeaux INP
bordeaux.institution: CNRS
bordeaux.conference.title: International Conference on Machine Learning (ICML'22)
bordeaux.country: US
bordeaux.conference.city: Baltimore
bordeaux.peerReviewed: yes
hal.identifier: hal-03550371
hal.version: 1
hal.invited: no
hal.proceedings: yes
hal.popular: no
hal.audience: International
hal.origin.link: https://hal.archives-ouvertes.fr//hal-03550371v1


There are no files associated with this document.
