Towards Speech Emotion Recognition Applied to Social Robots
Language
EN
Conference paper
This item is published in
IEEE Conference on Artificial Intelligence (CAI 2023) Proceedings; 2024 L Latin American Computer Conference (CLEI), 2024-08-12, Buenos Aires, pp. 1-10
IEEE
Abstract (English)
Nowadays, advances in technology allow social robots to be used for various daily tasks such as therapies, teaching assistance, and restaurant services, among others. Human-Robot Interaction (HRI) is under constant study due to the new capabilities that robots acquire thanks to their improved hardware (e.g., more joints). Robots receive information through sensors such as cameras and microphones and can thus modify their behavior and adapt to different situations. However, an exhaustive real-time analysis of data within the robot requires excessive computing power and energy, which are limited in social robots. In this context, we propose a lightweight Machine Learning model that balances accuracy and audio processing time to recognize the emotions of happiness, sadness, anger, and neutral in real time, aiming to improve HRI. Additionally, an empirical analysis is presented to identify the most relevant audio features for emotion recognition, with the objective of generating a lighter model better suited to the robot's hardware. Results show improved accuracy on the RAVDESS, IEMOCAP, and RAVDESS+IEMOCAP datasets, with a recognition time of around 1 second.
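The abstract describes a pipeline that extracts features from recorded speech and feeds them to a lightweight classifier, but this page does not specify the features or the model. A minimal sketch of such a pipeline, assuming MFCC features via librosa and a small scikit-learn random forest (both hypothetical choices, not the authors' implementation):

# Illustrative sketch only; the concrete features and model used in the paper
# are not given on this page. MFCCs and a random forest stand in for the
# lightweight model described in the abstract.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["happiness", "sadness", "anger", "neutral"]

def extract_features(wav_path, sr=16000):
    """Load one utterance and summarize it with mean MFCCs (hypothetical feature set)."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)  # one fixed-length vector per utterance

def train(feature_matrix, label_indices):
    """Fit a small, fast classifier suitable for constrained robot hardware.
    label_indices are integers indexing into EMOTIONS."""
    clf = RandomForestClassifier(n_estimators=50, max_depth=8, random_state=0)
    clf.fit(feature_matrix, label_indices)
    return clf

def predict_emotion(clf, wav_path):
    """Classify a single utterance into one of the four target emotions."""
    return EMOTIONS[int(clf.predict([extract_features(wav_path)])[0])]

In practice the training features would come from datasets such as RAVDESS and IEMOCAP, as mentioned in the abstract, and the model size would be tuned against the roughly 1-second recognition budget reported there.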
Keywords (English)
Emotion
Machine Learning
Social Robots
Speech Emotion Recognition
Research centers