
dc.rights.license           open                                                          en_US
dc.contributor.author       SAINT-GERMAIN, Logan
hal.structure.identifier    Laboratoire de l'intégration, du matériau au système [IMS]
dc.contributor.author       LE GAL, Bertrand
hal.structure.identifier    Laboratoire Bordelais de Recherche en Informatique [LaBRI]
dc.contributor.author       BALDACCI, Fabien (IDREF: 142618446)
hal.structure.identifier    Laboratoire de l'intégration, du matériau au système [IMS]
dc.contributor.author       CRENNE, Jeremie
hal.structure.identifier    Laboratoire de l'intégration, du matériau au système [IMS]
dc.contributor.author       JEGO, Christophe
dc.contributor.author       LOTY, Sebastien
dc.date.accessioned         2024-01-09T09:16:36Z
dc.date.available           2024-01-09T09:16:36Z
dc.date.issued              2022-10-25
dc.date.conference          2022-11-02
dc.identifier.issn          2473-2001                                                     en_US
dc.identifier.uri           https://oskar-bordeaux.fr/handle/20.500.12278/186969
dc.description.abstractEn   Artificial Intelligence is now ubiquitous, as nearly every application domain has found some use for it. The high computational complexity involved in its deployment has led to strong research activity on optimizing its integration into embedded systems. Research on efficient implementations of CNNs on resource-constrained devices (e.g., CPUs, FPGAs) largely focuses on hardware-based optimizations such as pruning, quantization, or hardware accelerators. However, most of the performance improvements leading to efficient solutions in terms of memory, complexity, and energy are located at the NN topology level, prior to any implementation step. This paper introduces a methodology called ANN2T (Artificial Neural Network to Target), which adapts a pre-trained deep neural network to a designated device under given optimization constraints. ANN2T applies its built-in simplifications and/or transformations to progressively modify the deep neural network layers until the optimization target is met. Experimental results obtained on a microcontroller show that ANN2T produces valuable trade-offs: it achieved up to 33% MACC and 37% memory-footprint reductions with no accuracy loss on the ResNet-18 topology over the CIFAR-10 dataset. This fully automated methodology could be generalized to targets such as CPUs, GPUs, or FPGAs.
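
The abstract describes ANN2T as a loop that progressively applies topology-level transformations to a pre-trained network until an optimization target (e.g., a MACC budget) is met without losing accuracy. The paper's actual algorithm and code are not reproduced in this record; the following Python sketch is only a hypothetical illustration of such a greedy loop. All names (Conv, prune_layer, the stub evaluator) are invented placeholders, and a real flow would fine-tune and validate the network at each step rather than use a constant-accuracy stub.

```python
"""Illustrative sketch of an ANN2T-style greedy adaptation loop.

Hypothetical code: not from the paper. The transformation set here is a
single channel-pruning move; ANN2T's actual simplifications/transformations
are described in the publication itself.
"""
from dataclasses import dataclass
from typing import Callable, List


@dataclass(frozen=True)
class Conv:                       # toy stand-in for one CNN layer
    in_ch: int
    out_ch: int
    k: int = 3                    # kernel size
    hw: int = 32                  # output feature-map side length


def macc(net: List[Conv]) -> int:
    """Multiply-accumulate count: the cost metric the paper reports reducing by up to 33%."""
    return sum(l.in_ch * l.out_ch * l.k * l.k * l.hw * l.hw for l in net)


def prune_layer(net: List[Conv], i: int, ratio: float = 0.75) -> List[Conv]:
    """One example transformation: shrink layer i's output channels
    (and the next layer's input channels, to keep shapes consistent)."""
    out = max(1, int(net[i].out_ch * ratio))
    new = list(net)
    new[i] = Conv(net[i].in_ch, out, net[i].k, net[i].hw)
    if i + 1 < len(net):
        new[i + 1] = Conv(out, net[i + 1].out_ch, net[i + 1].k, net[i + 1].hw)
    return new


def ann2t_like(net: List[Conv], evaluate: Callable[[List[Conv]], float],
               budget: int, tol: float = 0.0) -> List[Conv]:
    """Greedily apply whichever transformation best cuts cost while
    keeping the measured accuracy drop within `tol`."""
    base_acc = evaluate(net)
    while macc(net) > budget:
        candidates = [prune_layer(net, i) for i in range(len(net))]
        viable = [c for c in candidates if base_acc - evaluate(c) <= tol]
        if not viable:
            break                          # no transformation meets the accuracy constraint
        best = min(viable, key=macc)       # steepest cost reduction first
        if macc(best) >= macc(net):
            break                          # no further reduction possible
        net = best
    return net


if __name__ == "__main__":
    net = [Conv(3, 16), Conv(16, 32), Conv(32, 64)]
    # Stub evaluator: a real flow would fine-tune and validate here.
    adapted = ann2t_like(net, evaluate=lambda n: 0.90, budget=macc(net) // 2)
    print(f"MACC: {macc(net):,} -> {macc(adapted):,}")
```

The greedy structure (try each candidate transformation, keep the cheapest network whose accuracy stays within tolerance, repeat until the budget is met) is one plausible reading of "progressively modify the deep neural network layers in order to meet the optimization target"; the published method may differ.
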
dc.language.iso             EN                                                            en_US
dc.subject                  Performance evaluation
dc.subject                  Deep learning
dc.subject                  Quantization (signal)
dc.subject                  Network topology
dc.subject                  Microcontrollers
dc.subject                  Signal processing algorithms
dc.subject                  Artificial neural networks
dc.subject                  DNN
dc.subject                  Machine Learning
dc.subject                  Edge AI
dc.subject                  TinyML
dc.subject                  Embedded Systems
dc.subject                  Low Power Devices
dc.title.en                 Methodology to Adapt Neural Network on Constrained Device at Topology level
dc.type                     Conference paper                                              en_US
dc.identifier.doi           10.1109/SiPS55645.2022.9919244                                en_US
dc.subject.hal              Engineering Sciences [physics]                                en_US
bordeaux.page               1-6                                                           en_US
bordeaux.hal.laboratories   IMS : Laboratoire de l'Intégration du Matériau au Système - UMR 5218    en_US
bordeaux.institution        Université de Bordeaux                                        en_US
bordeaux.institution        Bordeaux INP                                                  en_US
bordeaux.institution        CNRS                                                          en_US
bordeaux.conference.title   2022 IEEE Workshop on Signal Processing Systems (SiPS)        en_US
bordeaux.country            fr                                                            en_US
bordeaux.title.proceeding   2022 IEEE Workshop on Signal Processing Systems (SiPS)        en_US
bordeaux.team               CIRCUIT DESIGN-CSN                                            en_US
bordeaux.conference.city    Rennes                                                        en_US
hal.identifier              hal-04381562
hal.version                 1
hal.date.transferred        2024-01-09T09:16:38Z
hal.invited                 yes                                                           en_US
hal.proceedings             yes                                                           en_US
hal.conference.end          2022-11-04
hal.popular                 no                                                            en_US
hal.audience                International                                                 en_US
hal.export                  true
dc.rights.cc                No CC license                                                 en_US


Files in this item

Files    Size    Format    View

There are no files associated with this item.

This item appears in the following Collection(s)
