
dc.rights.license: open [en_US]
dc.contributor.author: RICHARD, Sebastien
dc.contributor.author: BEURTON-AIMAR, Marie
hal.structure.identifier: Bordeaux population health [BPH]
dc.contributor.author: DELCOURT, Cecile
ORCID: 0000-0002-2099-0481
IDREF: 035105291
hal.structure.identifier: Bordeaux population health [BPH]
dc.contributor.author: DELYFER, Marie-Noelle
dc.date.accessioned: 2024-12-09T10:43:10Z
dc.date.available: 2024-12-09T10:43:10Z
dc.date.issued: 2024-07-01
dc.date.conference: 2024-05-05
dc.identifier.issn: 0146-0404 [en_US]
dc.identifier.uri: https://oskar-bordeaux.fr/handle/20.500.12278/203792
dc.description.abstractEn:
Purpose: To improve computer-assisted diagnosis methods by leveraging multimodal unlabeled datasets, naturally produced in ophthalmology practice. We apply Self-Supervised Learning (SSL) pretraining principles to jointly learn modality-invariant features from Optical Coherence Tomography (OCT) and Color Fundus Photography (CFP).

Methods: We used a dataset of 450 pairs of OCT and CFP images, sourced from the Alienor (Antioxydants, Lipides Essentiels, Nutrition et maladies OculaiRes) epidemiological study. Each pair was captured at the same time and from the same eye. CFP images were 1920x991-pixel, 3-channel images, and each OCT acquisition consisted of 19 B-scans, i.e. 32-bit 1024x496-pixel 2D images. A contrastive self-supervised objective was adapted from SimCLR (Chen et al., 2020), using the normalized temperature-scaled cross-entropy loss to maximize cross-modality similarity within pairs. A Convolutional Neural Network (CNN) based on EfficientNetB3 (Tan and Le, 2020) was used as the encoder. A trainable projection head of 3 non-linear layers was added on top to obtain a 512-dimensional shared modality space. A small module of 2 trainable convolutional layers was used to match OCT images to the same shape as CFP. The network was trained on various augmented views for 140 epochs, using the Adam optimizer and a cosine annealing learning rate scheduler, with a batch size of 16 pairs and 2 views per image, totaling 64 single views per loss computation.

Results: We compiled labeled, open-access datasets, resulting in a binary-task dataset of 2200 images indicating the presence or absence of AMD (drusen, exudation, hemorrhage). Our trained SSL model was fine-tuned and evaluated on this dataset using 5-fold cross-validation. First, we froze our EfficientNetB3 encoder and fine-tuned a small 3-layer MLP on top, achieving an AUC of 0.810 (+/- 0.040). Subsequently, the encoder was unfrozen and fully fine-tuned, achieving an AUC of 0.866 (+/- 0.022). It was compared to and found to outperform other pretraining methods, including ImageNet (AUC 0.814 +/- 0.031), self-supervision on the CFP modality alone (AUC 0.840 +/- 0.020), and even supervised pretraining on Alienor (AUC 0.827 +/- 0.033).

Conclusions: Our study demonstrates the feasibility of learning valuable features for AMD classification without annotations, using a cross-modality contrastive learning objective from OCT and CFP images. This abstract was presented at the 2024 ARVO Imaging in the Eye Conference, held in Seattle, WA, May 4, 2024.
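For readers who want a concrete picture of the pretraining objective summarized in the abstract, the PyTorch sketch below illustrates one way the described setup could be wired together: a shared EfficientNet-B3 encoder, a 2-layer convolutional adapter that brings the 19-B-scan OCT input to the 3-channel CFP format, a 3-layer projection head into a 512-dimensional shared space, and a normalized temperature-scaled cross-entropy (NT-Xent) loss in which the OCT and CFP embeddings of the same eye form the positive pair. This is a minimal sketch, not the authors' code: the adapter and head widths, the temperature, the use of torchvision's efficientnet_b3 as the backbone, the treatment of B-scans as input channels, and the single view per modality (the study used 2 augmented views per image) are all assumptions for illustration.

# Minimal sketch of a cross-modality SimCLR-style objective (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import efficientnet_b3

class OCTAdapter(nn.Module):
    # Two trainable conv layers mapping a 19-B-scan OCT stack (as channels) to 3 channels.
    # Spatial resizing to the CFP resolution is assumed to happen in preprocessing.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(19, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class ProjectionHead(nn.Module):
    # Three non-linear layers projecting 1536-d EfficientNet-B3 features to a 512-d shared space.
    def __init__(self, in_dim=1536, out_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, out_dim),
        )
    def forward(self, x):
        return self.net(x)

class CrossModalSimCLR(nn.Module):
    # One encoder shared by both modalities; the OCT branch goes through the adapter first.
    def __init__(self):
        super().__init__()
        backbone = efficientnet_b3(weights=None)
        backbone.classifier = nn.Identity()  # keep the 1536-d pooled features
        self.encoder = backbone
        self.oct_adapter = OCTAdapter()
        self.head = ProjectionHead()
    def forward(self, cfp, oct_stack):
        z_cfp = self.head(self.encoder(cfp))
        z_oct = self.head(self.encoder(self.oct_adapter(oct_stack)))
        return F.normalize(z_cfp, dim=1), F.normalize(z_oct, dim=1)

def nt_xent(z_a, z_b, temperature=0.1):
    # z_a, z_b: L2-normalized embeddings of the two modalities, shape (N, d);
    # row i of z_a and row i of z_b come from the same eye (the positive pair),
    # every other embedding in the batch acts as a negative.
    n = z_a.size(0)
    z = torch.cat([z_a, z_b], dim=0)               # (2N, d)
    sim = z @ z.t() / temperature                  # pairwise cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))     # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Toy usage with random tensors (real CFP images are 1920x991, OCT B-scans 1024x496):
model = CrossModalSimCLR()
cfp = torch.randn(4, 3, 300, 300)
oct_stack = torch.randn(4, 19, 300, 300)
loss = nt_xent(*model(cfp, oct_stack))
loss.backward()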
dc.language.iso: EN [en_US]
dc.title.en: Cross-modality Self-Supervised Learning from optical coherence tomography and color fundus for computer-assisted diagnosis of AMD
dc.type: Conference paper [en_US]
dc.subject.hal: Life Sciences [q-bio]/Public health and epidemiology [en_US]
bordeaux.volume: 65 [en_US]
bordeaux.hal.laboratories: Bordeaux Population Health Research Center (BPH) - UMR 1219 [en_US]
bordeaux.issue: 9 [en_US]
bordeaux.institution: Université de Bordeaux [en_US]
bordeaux.institution: INSERM [en_US]
bordeaux.conference.title: ARVO 2024 [en_US]
bordeaux.country: us [en_US]
bordeaux.title.proceeding: ARVO Imaging in the Eye Conference Abstract [en_US]
bordeaux.team: LEHA_BPH [en_US]
bordeaux.conference.city: Seattle [en_US]
hal.identifier: hal-04826625
hal.version: 1
hal.date.transferred: 2024-12-09T10:43:11Z
hal.proceedings: yes [en_US]
hal.conference.organizer: ARVO [en_US]
hal.conference.end: 2024-05-09
hal.popular: no [en_US]
hal.audience: International [en_US]
hal.export: true
dc.rights.cc: No CC license [en_US]


Files in this item

There are no files associated with this item.

This item appears in the following collection(s)
