dc.rights.license: open (en_US)
dc.contributor.author: HEREDIA, Juanpablo
dc.contributor.author: CARDINALE, Yudith
hal.structure.identifier: ESTIA INSTITUTE OF TECHNOLOGY
dc.contributor.author: DONGO, Irvin
dc.contributor.author: AGUILERA, Ana
dc.contributor.author: DIAZ-AMADO, Jose
dc.date.accessioned: 2023-04-05T09:07:46Z
dc.date.available: 2023-04-05T09:07:46Z
dc.date.issued: 2022-06-14
dc.date.conference: 2022-06-20
dc.identifier.uri: https://oskar-bordeaux.fr/handle/20.500.12278/172772
dc.description.abstractEn: In the context of Human-Robot Interaction (HRI), emotional understanding is gaining popularity because it makes robots more humanized and user-friendly. Giving a robot the ability to recognize emotions is difficult because of the limits of the robot's hardware and the real-world environments in which it operates. In this sense, an out-of-robot, multimodal approach can be the solution. This paper presents the implementation, in the context of social robotics, of a previously proposed multimodal emotional system that runs on a server and bases its prediction on four input modalities (face, posture, body, and context features) captured through the robot's sensors; the predicted emotion triggers changes in the robot's behavior. Running on a server overcomes the robot's hardware limitations at the cost of some communication delay. Working with several modalities makes it possible to handle complex real-world scenarios robustly and adaptively. This research focuses on analyzing, explaining, and arguing for the usability and viability of an out-of-robot, multimodal approach for emotional robots. Functionality tests produced the expected results, showing that the entire proposed system takes around two seconds; this delay is attributable to the deep learning models used, which can be improved. Regarding HRI evaluations, a brief discussion of the remaining assessments is presented, explaining how difficult a thorough evaluation of this work can be. A demonstration of the system's functionality can be seen at https://youtu.be/MYYfazSa2N0.
dc.language.iso: EN (en_US)
dc.publisher: IOS Press (en_US)
dc.rights: Attribution-NonCommercial 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc/3.0/us/
dc.subject.en: Emotional understanding
dc.subject.en: Human-robot interaction
dc.subject.en: Multimodal method
dc.title.en: Multimodal Emotional Understanding in Robotics
dc.type: Conference paper with published proceedings (en_US)
dc.identifier.doi: 10.3233/AISE220020 (en_US)
dc.subject.hal: Computer Science [cs] (en_US)
bordeaux.page: 46-55 (en_US)
bordeaux.volume: 31 (en_US)
bordeaux.hal.laboratories: ESTIA - Recherche (en_US)
bordeaux.institution: Université de Bordeaux (en_US)
bordeaux.institution: Bordeaux INP (en_US)
bordeaux.institution: Bordeaux Sciences Agro (en_US)
bordeaux.conference.title: Workshops at 18th International Conference on Intelligent Environments (IE2022) (en_US)
bordeaux.country: fr (en_US)
bordeaux.title.proceeding: Ambient Intelligence and Smart Environments (en_US)
bordeaux.conference.city: Biarritz (en_US)
bordeaux.peerReviewed: yes (en_US)
bordeaux.import.source: hal
hal.identifier: hal-03714863
hal.version: 1
hal.export: false
workflow.import.source: hal
dc.rights.cc: No CC license (en_US)
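To make the architecture described in the abstract concrete, below is a minimal, hypothetical sketch of the out-of-robot idea: the robot client sends per-modality features (face, posture, body, and context) to a remote server that fuses them into a single emotion label, and the robot then maps that label to a behavior change. The endpoint URL, request/response schema, emotion labels, and behavior names are illustrative assumptions, not the authors' actual interface or code.

# Hypothetical client-side sketch of the server-based (out-of-robot) multimodal
# emotion pipeline summarized in the abstract. All names below are assumptions.
import time
from typing import Dict, List

import requests  # widely used HTTP client; any transport would do

EMOTION_SERVER_URL = "http://emotion-server.local:8000/predict"  # hypothetical endpoint
MODALITIES = ("face", "posture", "body", "context")

# Example mapping from a predicted emotion to a robot behavior change.
BEHAVIOR_FOR_EMOTION = {
    "happy": "approach_and_greet",
    "sad": "slow_down_and_soften_voice",
    "angry": "increase_distance",
    "neutral": "continue_current_task",
}


def request_emotion(features: Dict[str, List[float]], timeout_s: float = 3.0) -> str:
    """Send per-modality feature vectors to the server and return an emotion label.

    The roughly two-second end-to-end delay reported in the abstract is attributed
    to the server-side deep learning models, so a modest client timeout suffices.
    """
    started = time.monotonic()
    response = requests.post(EMOTION_SERVER_URL, json={"features": features}, timeout=timeout_s)
    response.raise_for_status()
    emotion = response.json()["emotion"]  # assumed response schema
    print(f"server round trip: {time.monotonic() - started:.2f} s")
    return emotion


def react_to(emotion: str) -> str:
    """Map the predicted emotion to a behavior change (placeholder for robot control)."""
    return BEHAVIOR_FOR_EMOTION.get(emotion, "continue_current_task")


if __name__ == "__main__":
    # Dummy feature vectors standing in for the robot's sensor-derived inputs.
    dummy_features = {modality: [0.0] * 8 for modality in MODALITIES}
    behavior = react_to(request_emotion(dummy_features))
    print("behavior change:", behavior)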

