Show simple item record

dc.rights.license	open	en_US
hal.structure.identifier	Institut des Systèmes Intelligents et de Robotique [ISIR]
dc.contributor.author	GALLAND, Lucie
hal.structure.identifier	Institut des Systèmes Intelligents et de Robotique [ISIR]
dc.contributor.author	PELACHAUD, Catherine
hal.structure.identifier	Sommeil, Addiction et Neuropsychiatrie [Bordeaux] [SANPSY]
dc.contributor.author	PECUNE, Florian
dc.date.accessioned	2025-01-03T08:21:48Z
dc.date.available	2025-01-03T08:21:48Z
dc.date.conference	2024-05-28
dc.identifier.uri	oai:crossref.org:10.1109/fg59268.2024.10581979
dc.identifier.uri	https://oskar-bordeaux.fr/handle/20.500.12278/204128
dc.description.abstractEn	Motivational Interviewing (MI) is an approach to therapy that emphasizes collaboration and encourages behavioral change. To evaluate the quality of an MI conversation, client utterances can be classified using the MISC code as change talk, sustain talk, or follow/neutral talk. The proportion of change talk in an MI conversation is positively correlated with therapy outcomes, making accurate classification of client utterances essential. In this paper, we present a classifier that accurately distinguishes between the three MISC classes (change talk, sustain talk, and follow/neutral talk), leveraging multimodal features such as text, prosody, facial expressivity, and body expressivity. To train our model, we perform annotations on the publicly available AnnoMI dataset to collect multimodal information, including text, audio, facial expressivity, and body expressivity. Furthermore, we identify the most important modalities in the decision-making process, providing valuable insights into the interplay of different modalities during an MI conversation.
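A rough, runnable illustration of the task the abstract describes (a generic sketch, not the authors' model): it fuses four hypothetical modality feature blocks at the feature level, trains a 3-way linear classifier over the MISC classes, and probes per-modality weight magnitudes as a crude interpretability signal. All names, dimensions, and data below are made up for illustration.

# Minimal sketch (not the paper's model): feature-level fusion of four
# modality blocks for 3-way MISC classification, with a crude per-modality
# weight probe. All feature sizes and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-utterance feature sizes for each modality.
DIMS = {"text": 768, "prosody": 88, "face": 17, "body": 6}
n = 600
X = np.concatenate([rng.normal(size=(n, d)) for d in DIMS.values()], axis=1)
y = rng.integers(0, 3, size=n)  # 0=change, 1=sustain, 2=follow/neutral

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))  # ~chance on random data

# Interpretability probe: mean absolute weight per modality block hints
# at which modality the fused linear model relies on most.
start = 0
for name, d in DIMS.items():
    block = clf.coef_[:, start:start + d]
    print(f"{name}: mean |w| = {np.abs(block).mean():.4f}")
    start += d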
dc.description.sponsorship	Entrainement d'aptitudes sociales affectives personnalisées et adaptées avec des agents culturels virtuels - ANR-19-JSTS-0001	en_US
dc.description.sponsorship	adaPtAtion de l'iNtelligence artificielle pOur l'inteRActiOn homme-MAchine - ANR-20-IADJ-0008	en_US
dc.language.iso	EN	en_US
dc.publisher	IEEE	en_US
dc.source	crossref
dc.subject.en	Change talk
dc.subject.en	Multimodality
dc.subject.en	Interpretable
dc.title.en	Seeing and hearing what has not been said: a multimodal client behavior classifier in Motivational Interviewing with interpretable fusion
dc.type	Conference paper	en_US
dc.identifier.doi	10.1109/fg59268.2024.10581979	en_US
dc.subject.hal	Computer Science [cs]/Human-Computer Interaction [cs.HC]	en_US
bordeaux.page	1-9	en_US
bordeaux.hal.laboratories	SANPSY (Sommeil, Addiction, Neuropsychiatrie) - UMR 6033	en_US
bordeaux.institution	Université de Bordeaux	en_US
bordeaux.institution	CNRS	en_US
bordeaux.conference.title	2024 IEEE 18th International Conference on Automatic Face and Gesture Recognition (FG)	en_US
bordeaux.country	tr	en_US
bordeaux.conference.city	Istanbul	en_US
bordeaux.import.source	dissemin
hal.proceedings	yes	en_US
hal.conference.organizer	IEEE	en_US
hal.conference.end	2024-05-31
hal.popular	no	en_US
hal.audience	International	en_US
hal.export	false
workflow.import.source	dissemin
dc.rights.cc	No CC license	en_US
bordeaux.COinS	ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.spage=1&rft.epage=9&rft.au=GALLAND,%20Lucie&PELACHAUD,%20Catherine&PECUNE,%20Florian&rft.genre=unknown

