Towards Speech Emotion Recognition Applied to Social Robots
Language
EN
Conference paper
This document was published in
IEEE Conference on Artificial Intelligence (CAI 2023) Proceedings; 2024 L Latin American Computer Conference (CLEI), 2024-08-12, Buenos Aires, pp. 1-10
IEEE
Abstract (English)
Nowadays, the advancement of technology allows the use of social robots for various daily tasks such as therapy, teaching assistance, and restaurant service, among others. Human-Robot Interaction (HRI) is under constant study due to the new capabilities that robots acquire thanks to their improved hardware (e.g., more joints). Robots receive information through sensors such as cameras and microphones and can thus modify their behavior and adapt to different situations. However, an exhaustive real-time analysis of data within the robot requires excessive computing power and energy, both of which are limited in social robots. In this context, we propose a lightweight Machine Learning model that balances accuracy and audio processing time to recognize the emotions of happiness, sadness, anger, and neutral in real time, aiming to improve HRI. Additionally, we present an empirical analysis that identifies the most relevant audio features for emotion recognition, with the objective of producing a lighter model better suited to the robot's hardware. Results show improved accuracy on the RAVDESS, IEMOCAP, and combined RAVDESS+IEMOCAP datasets, with a recognition time of around 1 second.
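The pipeline the abstract describes (cheap audio features feeding a lightweight classifier, so inference fits a robot's limited compute budget) can be sketched minimally. The feature set here (log energy and zero-crossing rate) and the nearest-centroid classifier are illustrative assumptions for the sketch, not the paper's actual features or model, and the audio signals are synthetic rather than RAVDESS/IEMOCAP recordings.

```python
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "neutral"]  # classes named in the abstract

def extract_features(signal: np.ndarray) -> np.ndarray:
    """Two inexpensive utterance-level features (assumed for this sketch)."""
    log_energy = np.log(np.mean(signal ** 2) + 1e-12)          # overall loudness
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2.0      # zero-crossing rate
    return np.array([log_energy, zcr])

class NearestCentroid:
    """Lightweight classifier: one centroid per class, O(#classes) at inference."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        y = np.asarray(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, X):
        # Euclidean distance from each sample to each class centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return [self.labels_[i] for i in d.argmin(axis=1)]

# Synthetic demo: loud signals stand in for "anger", quiet ones for "sadness".
rng = np.random.default_rng(0)
angry = [rng.normal(0, 1.0, 16000) for _ in range(5)]
sad = [rng.normal(0, 0.05, 16000) for _ in range(5)]
X = np.array([extract_features(s) for s in angry + sad])
y = ["anger"] * 5 + ["sadness"] * 5

clf = NearestCentroid().fit(X, y)
pred = clf.predict(np.array([extract_features(rng.normal(0, 1.0, 16000))]))
```

Feature extraction and prediction are a handful of vector operations per utterance, which is the kind of cost profile that makes sub-second recognition plausible on constrained robot hardware; the paper's empirical feature analysis would determine which features actually carry the emotional signal.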
Keywords (English)
Emotion
Machine Learning
Social Robots
Speech Emotion Recognition
Research units