hal.structure.identifier: Cognition, Langues, Langage, Ergonomie [CLLE-ERSS]
dc.contributor.author: SHOCHI, Takaaki
hal.structure.identifier: Laboratoire Bordelais de Recherche en Informatique [LaBRI]
dc.contributor.author: ROUAS, Jean-Luc
hal.structure.identifier: Cognition, Langues, Langage, Ergonomie [CLLE-ERSS]
dc.contributor.author: GUERRY, Marine
hal.structure.identifier: Sophia University [Tokyo]
dc.contributor.author: ERICKSON, Donna
dc.date.accessioned: 2022-03-07T14:28:04Z
dc.date.available: 2022-03-07T14:28:04Z
dc.date.conference: 2018-09-02
dc.identifier.uri: https://oskar-bordeaux.fr/handle/20.500.12278/129988
dc.description.abstractEn: This study focuses on cross-cultural differences in the perception of audiovisual prosodic recordings of Japanese social affects. It compares the perceptual patterns of 21 Japanese subjects with those of 20 French subjects who have no knowledge of the Japanese language or of Japanese social affects. The test material is a semantically and affectively neutral utterance expressed with 9 different social affects by 2 Japanese speakers (one male, one female), who were chosen as the best performers in our previous recognition experiment. The task was to create a specific audiovisual affect by choosing one video stimulus among 9 choices and one audio stimulus, again among 9 choices. Participants could preview each audio and video stimulus individually, as well as the combination of the chosen stimuli. The results reveal that native subjects can correctly combine auditorily and visually expressed social affects, with some confusion within semantic categories. Different matching patterns are observed for non-native subjects, especially for a culture-specific type of politeness.
dc.language.iso: en
dc.source.title: Interspeech 2018
dc.subject.en: Cultural difference
dc.subject.en: Pattern matching
dc.subject.en: Multisensory recognition
dc.title.en: Cultural Differences in Pattern Matching: Multisensory Recognition of Socio-affective Prosody
dc.type: Conference paper with published proceedings (Communication dans un congrès avec actes)
dc.identifier.doi: 10.21437/interspeech.2018-1795
dc.subject.hal: Computer Science [cs]/Signal and Image Processing
bordeaux.hal.laboratories: CLLE Montaigne : Cognition, langues, Langages, Ergonomie - UMR 5263
bordeaux.institution: Université Bordeaux Montaigne
bordeaux.country: IN
bordeaux.title.proceeding: Interspeech 2018
bordeaux.conference.city: Hyderabad
bordeaux.peerReviewed: yes (oui)
hal.identifier: hal-01913705
hal.version: 1
hal.origin.link: https://hal.archives-ouvertes.fr//hal-01913705v1
bordeaux.COinS: ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.btitle=Interspeech%202018&rft.au=SHOCHI,%20Takaaki&ROUAS,%20Jean-Luc&GUERRY,%20Marine&ERICKSON,%20Donna&rft.genre=proceeding