
dc.rights.license: open (en_US)
dc.contributor.author: HEREDIA, Juanpablo
dc.contributor.author: LOPES-SILVA, Edmundo
dc.contributor.author: CARDINALE, Yudith
dc.contributor.author: DIAZ-AMADO, Jose
hal.structure.identifier: ESTIA INSTITUTE OF TECHNOLOGY
dc.contributor.author: DONGO, Irvin
dc.contributor.author: GRATEROL, Wilfredo
dc.contributor.author: AGUILERA, Ana
dc.date.accessioned: 2023-04-04T08:56:21Z
dc.date.available: 2023-04-04T08:56:21Z
dc.date.issued: 2022-02-07
dc.identifier.issn: 2169-3536 (en_US)
dc.identifier.uri: https://oskar-bordeaux.fr/handle/20.500.12278/172708
dc.description.abstractEn: Emotion recognition is a strategy social robots use to implement better Human-Robot Interaction and to model their social behaviour. Since human emotions can be expressed in different ways (e.g., face, gesture, voice), multimodal approaches help support the recognition process. However, existing studies on multimodal emotion recognition for social robots still present limitations in the fusion process: their performance drops if one or more modalities are absent or if the modalities differ in quality. This situation is common in social robotics, given the wide variety of robots' sensory capacities; hence, more flexible multimodal models are needed. In this context, we propose an adaptive and flexible emotion recognition architecture able to work with multiple sources and modalities of information and to manage different levels of data quality and missing data, so that robots can better understand the mood of people in a given environment and adapt their behaviour accordingly. Each modality is analyzed independently; the partial results are then aggregated with a previously proposed fusion method, EmbraceNet+, which we adapt and integrate into our framework. We also present an extensive review of state-of-the-art fusion methods for multimodal emotion recognition. We evaluate the performance of the proposed architecture through tests in which several modalities are combined to classify emotions into four categories (i.e., happiness, neutral, sadness, and anger). Results reveal that our approach adapts to the quality and presence of modalities. Furthermore, the results are validated and compared with similar proposals, showing performance competitive with state-of-the-art models.
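
The abstract sketches the core mechanism: each modality is encoded independently into a shared space, and an EmbraceNet-style fusion assembles a single vector even when modalities are missing or degraded. Below is a minimal, illustrative Python sketch of that idea only; the modality names, dimensions, random projections, and untrained classifier head are placeholders, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
EMOTIONS = ["happiness", "neutral", "sadness", "anger"]  # the paper's four classes
D = 8  # shared "docking" size; illustrative only

def dock(features, d=D):
    """Project one modality's features into the shared d-dim space
    (a random linear map here; a trained layer in practice)."""
    w = rng.standard_normal((d, len(features)))
    return w @ features

def embrace(docked, available):
    """EmbraceNet-style fusion: for each of the d positions, sample one
    *available* modality and keep its value at that position."""
    probs = np.asarray(available, dtype=float)
    probs /= probs.sum()                      # renormalize over present modalities
    choice = rng.choice(len(docked), size=D, p=probs)
    stacked = np.stack(docked)                # (n_modalities, D)
    return stacked[choice, np.arange(D)]

# Three hypothetical modalities (face, voice, gesture); voice is missing.
inputs = [rng.standard_normal(5), np.zeros(4), rng.standard_normal(6)]
available = [1, 0, 1]
fused = embrace([dock(x) for x in inputs], available)

# An untrained linear head scoring the four emotion classes (illustration only).
scores = rng.standard_normal((len(EMOTIONS), D)) @ fused
print(EMOTIONS[int(np.argmax(scores))])
```

The availability weights could also encode per-modality quality rather than a hard 0/1 mask, which is the flexibility the abstract emphasizes.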
dc.language.iso: EN (en_US)
dc.rights: Attribution 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/us/
dc.subject.en: Social robots
dc.subject.en: Multimodal models
dc.subject.en: Emotion recognition
dc.subject.en: Fusion process
dc.title.en: Adaptive Multimodal Emotion Detection Architecture for Social Robots
dc.type: Journal article (en_US)
dc.identifier.doi: 10.1109/ACCESS.2022.3149214 (en_US)
dc.subject.hal: Computer Science [cs] (en_US)
bordeaux.journal: IEEE Access (en_US)
bordeaux.page: 20727-20744 (en_US)
bordeaux.volume: 10 (en_US)
bordeaux.hal.laboratories: ESTIA - Recherche (en_US)
bordeaux.institution: Université de Bordeaux (en_US)
bordeaux.institution: Bordeaux INP (en_US)
bordeaux.institution: Bordeaux Sciences Agro (en_US)
bordeaux.peerReviewed: yes (en_US)
bordeaux.inpress: no (en_US)
bordeaux.import.source: hal
hal.identifier: hal-03593415
hal.version: 1
hal.export: false
workflow.import.source: hal
dc.rights.cc: No CC license (en_US)
bordeaux.COinS: ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.jtitle=IEEE%20Access&rft.date=2022-02-07&rft.volume=10&rft.spage=20727-20744&rft.epage=20727-20744&rft.eissn=2169-3536&rft.issn=2169-3536&rft.au=HEREDIA,%20Juanpablo&LOPES-SILVA,%20Edmundo&CARDINALE,%20Yudith&DIAZ-AMADO,%20Jose&DONGO,%20Irvin&rft.genre=article

