Conference papers

Modeling Multimodal Behaviors from Speech Prosody

Yu Ding 1,2, Catherine Pelachaud 1,2, Thierry Artières 3
1 MM - Multimédia, LTCI - Laboratoire Traitement et Communication de l'Information
3 MLIA - Machine Learning and Information Access, LIP6 - Laboratoire d'Informatique de Paris 6
Abstract: Head and eyebrow movements are an important means of communication, and they are highly synchronized with speech prosody. Endowing a virtual agent with synchronized verbal and nonverbal behavior enhances its communicative performance. In this paper, we propose an animation model for a virtual agent based on a statistical model linking speech prosody and facial movement. A fully parameterized Hidden Markov Model is proposed, first to capture the tight relationship between speech and the facial movements of a human face extracted from a video corpus, and then to automatically drive the virtual agent's behaviors from speech signals. The correlation between head and eyebrow movements is also taken into account when building the model. Subjective and objective evaluations were conducted to validate this model.
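The prosody-to-motion pipeline described in the abstract can be illustrated with a plain Gaussian HMM. The sketch below is only an assumption-laden illustration of the general idea, not the paper's fully parameterized HMM: it assumes the hmmlearn library, invented feature dimensions (pitch/energy for prosody, head rotations plus an eyebrow parameter for motion), and random placeholder data standing in for a real video corpus.

```python
# Illustrative sketch only: a plain Gaussian HMM linking prosody to head/eyebrow
# motion. The paper's fully parameterized HMM is richer; this just shows the
# general prosody-to-motion pipeline with hmmlearn (assumed library and features).
import numpy as np
from hmmlearn import hmm

def train_prosody_motion_hmm(prosody, motion, n_states=10):
    """prosody: (T, Dp) pitch/energy features; motion: (T, Dm) head+eyebrow parameters."""
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=0)
    model.fit(prosody)                # learn hidden states from prosody alone
    states = model.predict(prosody)   # hard state assignment for each frame
    # Associate each hidden state with the mean motion observed while in that state.
    state_motion = np.vstack([
        motion[states == s].mean(axis=0) if np.any(states == s)
        else np.zeros(motion.shape[1])
        for s in range(n_states)
    ])
    return model, state_motion

def synthesize_motion(model, state_motion, new_prosody):
    """Drive the agent: decode states from unseen prosody, emit per-state motion."""
    states = model.predict(new_prosody)
    return state_motion[states]       # (T, Dm) motion trajectory

# Toy usage with random data in place of a real corpus.
rng = np.random.default_rng(0)
prosody = rng.normal(size=(500, 2))   # e.g. f0 and energy per frame
motion = rng.normal(size=(500, 4))    # e.g. 3 head rotations + 1 eyebrow raise
model, state_motion = train_prosody_motion_hmm(prosody, motion)
trajectory = synthesize_motion(model, state_motion, rng.normal(size=(100, 2)))
print(trajectory.shape)               # (100, 4)
```

In this toy version each decoded state emits a fixed average pose; the paper instead learns the speech-to-motion mapping jointly and models the correlation between head and eyebrow movements.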

https://hal.telecom-paris.fr/hal-02412034
Contributor: Telecomparis Hal
Submitted on: Sunday, December 15, 2019 - 12:42:39 PM
Last modification on: Wednesday, January 8, 2020 - 1:06:21 AM

Citation

Yu Ding, Catherine Pelachaud, Thierry Artières. Modeling Multimodal Behaviors from Speech Prosody. IVA 2013 - 13th International Conference on Intelligent Virtual Agents, Aug 2013, Edinburgh, United Kingdom. pp.217-228, ⟨10.1007/978-3-642-40415-3_19⟩. ⟨hal-02412034⟩
