Show simple item record
Categorisation of spoken social affects in Japanese: human vs. machine
hal.structure.identifier | Laboratoire Bordelais de Recherche en Informatique [LaBRI] | |
dc.contributor.author | ROUAS, Jean-Luc | |
hal.structure.identifier | Cognition, Langues, Langage, Ergonomie [CLLE-ERSS] | |
hal.structure.identifier | Laboratoire Bordelais de Recherche en Informatique [LaBRI] | |
dc.contributor.author | SHOCHI, Takaaki | |
hal.structure.identifier | Cognition, Langues, Langage, Ergonomie [CLLE-ERSS] | |
dc.contributor.author | GUERRY, Marine | |
hal.structure.identifier | Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur [LIMSI] | |
dc.contributor.author | RILLIARD, Albert | |
dc.date.accessioned | 2022-03-07T14:26:39Z | |
dc.date.available | 2022-03-07T14:26:39Z | |
dc.date.conference | 2019-08-04 | |
dc.identifier.uri | https://oskar-bordeaux.fr/handle/20.500.12278/129801 | |
dc.description.abstractEn | In this paper, we investigate the abilities of both human listeners and computers to categorise social affects using only speech. The database used is composed of speech recorded by 19 native Japanese speakers. It is first evaluated perceptually to rank speakers according to their perceived performance. The four best speakers are then selected for a categorisation experiment on nine social affects spread across four broad categories. An automatic classification experiment is then carried out using prosodic features and voice quality related features. The automatic classification system takes advantage of a feature selection algorithm and uses Linear Discriminant Analysis. The results show that the performance obtained by automatic classification using only eight features is comparable to the performance produced by our set of listeners: three out of four broad categories are quite well identified, whereas the seduction affect is poorly recognised by both the listeners and the computer. | |
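The abstract describes a pipeline of feature selection followed by Linear Discriminant Analysis over prosodic and voice-quality features. The sketch below illustrates that kind of pipeline with scikit-learn; it is not the authors' code, and the data shapes, feature matrix, label set, and forward-selection strategy are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): feature selection followed
# by Linear Discriminant Analysis, assuming scikit-learn and a pre-extracted
# matrix of prosodic / voice-quality features. Data and shapes are hypothetical.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(360, 30))      # 360 utterances x 30 acoustic features (hypothetical)
y = rng.integers(0, 9, size=360)    # 9 social-affect labels (hypothetical)

lda = LinearDiscriminantAnalysis()
# Greedy forward selection down to 8 features, mirroring the "only eight
# features" mentioned in the abstract (the selection strategy is an assumption).
selector = SequentialFeatureSelector(
    lda, n_features_to_select=8, direction="forward", cv=5
)

clf = make_pipeline(StandardScaler(), selector, lda)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```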
dc.language.iso | en | |
dc.source.title | International Congress of Phonetic Sciences ICPhS 2019 | |
dc.subject.en | Prosodic analysis | |
dc.subject.en | Expressive speech | |
dc.subject.en | Speech perception | |
dc.subject.en | Social attitudes | |
dc.title.en | Categorisation of spoken social affects in Japanese: human vs. machine | |
dc.type | Conference paper with published proceedings | |
dc.subject.hal | Computer Science [cs]/Signal and Image Processing | |
dc.subject.hal | Cognitive Sciences/Linguistics | |
bordeaux.hal.laboratories | CLLE Montaigne : Cognition, langues, Langages, Ergonomie - UMR 5263 | * |
bordeaux.institution | Université Bordeaux Montaigne | |
bordeaux.country | AU | |
bordeaux.title.proceeding | International Congress of Phonetic Sciences ICPhS | |
bordeaux.conference.city | Melbourne | |
bordeaux.peerReviewed | yes | |
hal.identifier | hal-02317743 | |
hal.version | 1 | |
hal.origin.link | https://hal.archives-ouvertes.fr//hal-02317743v1 | |
Files in this item
Files | Size | Format | View |
---|---|---|---|
No files associated with this item. |