hal.structure.identifier: Laboratoire Bordelais de Recherche en Informatique [LaBRI]
dc.contributor.author: ROUAS, Jean-Luc
hal.structure.identifier: Cognition, Langues, Langage, Ergonomie [CLLE-ERSS]
hal.structure.identifier: Laboratoire Bordelais de Recherche en Informatique [LaBRI]
dc.contributor.author: SHOCHI, Takaaki
hal.structure.identifier: Cognition, Langues, Langage, Ergonomie [CLLE-ERSS]
dc.contributor.author: GUERRY, Marine
hal.structure.identifier: Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur [LIMSI]
dc.contributor.author: RILLIARD, Albert
dc.date.accessioned: 2022-03-07T14:26:39Z
dc.date.available: 2022-03-07T14:26:39Z
dc.date.conference: 2019-08-04
dc.identifier.uri: https://oskar-bordeaux.fr/handle/20.500.12278/129801
dc.description.abstractEn: In this paper, we investigate the abilities of both human listeners and computers to categorise social affects using only speech. The database used is composed of speech recorded by 19 native Japanese speakers. It is first evaluated perceptually to rank the speakers according to their perceived performance. The four best speakers are then selected for a categorisation experiment covering nine social affects spread across four broad categories. An automatic classification experiment is then carried out using prosodic features and voice-quality-related features. The automatic classification system takes advantage of a feature selection algorithm and uses Linear Discriminant Analysis. The results show that the performance obtained by automatic classification using only eight features is comparable to that of our set of listeners: three of the four broad categories are identified quite well, whereas the seduction affect is poorly recognised by both the listeners and the computer.
dc.language.iso: en
dc.source.title: International Congress of Phonetic Sciences ICPhS 2019
dc.subject.en: Prosodic analysis
dc.subject.en: Expressive speech
dc.subject.en: Speech perception
dc.subject.en: Social attitudes
dc.title.en: Categorisation of spoken social affects in Japanese: human vs. machine
dc.type: Conference paper with proceedings (Communication dans un congrès avec actes)
dc.subject.hal: Computer Science [cs]/Signal and Image Processing
dc.subject.hal: Cognitive Science/Linguistics
bordeaux.hal.laboratories: CLLE Montaigne : Cognition, langues, Langages, Ergonomie - UMR 5263*
bordeaux.institution: Université Bordeaux Montaigne
bordeaux.country: AU
bordeaux.title.proceeding: International Congress of Phonetic Sciences ICPhS
bordeaux.conference.city: Melbourne
bordeaux.peerReviewed: yes
hal.identifier: hal-02317743
hal.version: 1
hal.origin.link: https://hal.archives-ouvertes.fr//hal-02317743v1
bordeaux.COinS: ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.btitle=International%20Congress%20of%20Phonetic%20Sciences%20ICPhS%202019&rft.au=ROUAS,%20Jean-Luc&SHOCHI,%20Takaaki&GUERRY,%20Marine&RILLIARD,%20Albert&rft.genre=proceeding
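The abstract describes a pipeline that selects a small subset (eight) of prosodic and voice-quality features and classifies social affects with Linear Discriminant Analysis. A minimal sketch of that general approach, using scikit-learn on synthetic data — the feature matrix, labels, and the ANOVA-based selector are illustrative assumptions, not the authors' actual features or selection algorithm:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for acoustic measurements: 200 utterances,
# 30 candidate prosodic / voice-quality features (hypothetical numbers).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = rng.integers(0, 4, size=200)          # four broad affect categories
X[:, :3] += y[:, None]                    # make a few features informative

# Keep only 8 features (as in the abstract), then apply LDA.
clf = make_pipeline(SelectKBest(f_classif, k=8),
                    LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

With only the selected features feeding the LDA, the classifier stays low-dimensional and interpretable, which matches the paper's point that eight features suffice to approach human performance.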


Files in this item

There are no files associated with this item.
