Multimodal Emotional Understanding in Robotics
dc.rights.license | open | en_US |
dc.contributor.author | HEREDIA, Juanpablo | |
dc.contributor.author | CARDINALE, Yudith | |
hal.structure.identifier | ESTIA - Institute of technology [ESTIA] | |
dc.contributor.author | DONGO, Irvin | |
dc.contributor.author | AGUILERA, Ana | |
dc.contributor.author | DIAZ-AMADO, Jose | |
dc.date.accessioned | 2023-04-05T09:07:46Z | |
dc.date.available | 2023-04-05T09:07:46Z | |
dc.date.issued | 2022-06-14 | |
dc.date.conference | 2022-06-20 | |
dc.identifier.uri | https://oskar-bordeaux.fr/handle/20.500.12278/172772 | |
dc.description.abstractEn | In the context of Human-Robot Interaction (HRI), emotional understanding is becoming more popular because it makes robots more humanized and user-friendly. Giving a robot the ability to recognize emotions raises several difficulties due to the limits of the robot’s hardware and the real-world environments in which it works. In this sense, an out-of-robot approach combined with a multimodal approach can be the solution. This paper presents the implementation of a previously proposed multimodal emotional system in the context of social robotics. The system runs on a server and bases its prediction on four modalities as inputs (face, posture, body, and context features) captured through the robot’s sensors; the predicted emotion triggers some robot behavior changes. Running on a server overcomes the robot’s hardware limitations at the cost of some communication delay, while working with several modalities allows the system to handle complex real-world scenarios robustly and adaptively. This research focuses on analyzing, explaining, and arguing for the usability and viability of an out-of-robot, multimodal approach for emotional robots. Functionality tests produced the expected results, showing that the entire proposed system takes around two seconds; this delay is attributable to the deep learning models used, which can still be improved. Regarding the HRI evaluations, a brief discussion of the remaining assessments is presented, explaining how difficult a thorough evaluation of this work can be. A demonstration of the system’s functionality can be seen at https://youtu.be/MYYfazSa2N0. | |
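The abstract describes an out-of-robot pipeline: the robot captures four modality inputs (face, posture, body, and context features), a server fuses per-modality predictions into one emotion, and that emotion triggers a behavior change. The sketch below is a minimal, hypothetical illustration of such a late-fusion pipeline; all function names, the dummy scores, and the averaging fusion rule are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of an out-of-robot, multimodal emotion pipeline.
# The per-modality "models" and the averaging fusion rule are illustrative
# stand-ins, not the deep learning models used in the paper.
from statistics import fmean

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def predict_modality(features):
    """Stand-in for a per-modality deep model: normalize to class scores."""
    total = sum(features) or 1.0
    return [f / total for f in features]

def fuse(per_modality_scores):
    """Late fusion: average the class scores across modalities."""
    return [fmean(scores[i] for scores in per_modality_scores)
            for i in range(len(EMOTIONS))]

def server_predict(inputs):
    """Server side: run each modality model, fuse, and pick an emotion."""
    scores = [predict_modality(inputs[m])
              for m in ("face", "posture", "body", "context")]
    fused = fuse(scores)
    return EMOTIONS[max(range(len(EMOTIONS)), key=fused.__getitem__)]

# Robot side: dummy feature vectors per modality, one score per emotion.
sensor_inputs = {
    "face":    [0.7, 0.1, 0.1, 0.1],
    "posture": [0.5, 0.2, 0.1, 0.2],
    "body":    [0.6, 0.1, 0.2, 0.1],
    "context": [0.4, 0.2, 0.2, 0.2],
}
emotion = server_predict(sensor_inputs)
print(emotion)  # the robot would then adapt its behavior to this emotion
```

In the paper's architecture this exchange happens over the network, which is where the reported ~2-second round-trip delay comes from.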
dc.language.iso | EN | en_US |
dc.publisher | IOS Press | en_US |
dc.rights | Attribution-NonCommercial 3.0 United States | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc/3.0/us/ | * |
dc.subject.en | Emotional understanding | |
dc.subject.en | Human-robot interaction | |
dc.subject.en | Multimodal method | |
dc.title.en | Multimodal Emotional Understanding in Robotics | |
dc.type | Conference paper | en_US |
dc.identifier.doi | 10.3233/AISE220020 | en_US |
dc.subject.hal | Computer Science [cs] | en_US |
bordeaux.page | 46-55 | en_US |
bordeaux.volume | 31 | en_US |
bordeaux.hal.laboratories | ESTIA - Recherche | en_US |
bordeaux.institution | Université de Bordeaux | en_US |
bordeaux.institution | Bordeaux INP | en_US |
bordeaux.institution | Bordeaux Sciences Agro | en_US |
bordeaux.conference.title | Workshops at 18th International Conference on Intelligent Environments (IE2022) | en_US |
bordeaux.country | fr | en_US |
bordeaux.title.proceeding | Ambient Intelligence and Smart Environments | en_US |
bordeaux.conference.city | Biarritz | en_US |
bordeaux.peerReviewed | yes | en_US |
bordeaux.import.source | hal | |
hal.identifier | hal-03714863 | |
hal.version | 1 | |
hal.export | false | |
workflow.import.source | hal | |
dc.rights.cc | No CC license | en_US |
bordeaux.COinS | ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.date=2022-06-14&rft.volume=31&rft.spage=46&rft.epage=55&rft.au=HEREDIA,%20Juanpablo&CARDINALE,%20Yudith&DONGO,%20Irvin&AGUILERA,%20Ana&DIAZ-AMADO,%20Jose&rft.genre=unknown |