Conference papers

Real-time Visual Prosody for Interactive Virtual Agents

Herwin van Welbergen 1, 2, Yu Ding 1, 2, Kai Sattler, Catherine Pelachaud 1, 2, Stefan Kopp
1 MM - Multimédia
LTCI - Laboratoire Traitement et Communication de l'Information
Abstract: Speakers accompany their speech with incessant, subtle head movements. Implementing such "visual prosody" in virtual agents is important not only to make their behavior more natural, but also because it has been shown to help listeners understand speech. We contribute a visual prosody model for interactive virtual agents that are capable of live, non-scripted interactions with humans and must therefore use Text-To-Speech (TTS) rather than recorded speech. We present our method for creating visual prosody online from continuous TTS output, and we report results from three crowdsourcing experiments carried out to assess if, and to what extent, it enhances the interaction experience with an agent.
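The abstract describes generating head motion online from continuous TTS output. As a rough illustration of that idea (not the paper's learned model; all function names, gains, and the F0 baseline below are hypothetical), one could map per-frame prosody features such as pitch (F0) and energy to small, smoothed head-rotation offsets:

```python
# Illustrative sketch only: maps per-frame TTS prosody features (F0, energy)
# to small head-rotation offsets with online smoothing. This is NOT the
# model from the paper; all constants and names here are assumptions.

def head_motion_from_prosody(frames, alpha=0.2, pitch_gain=0.03, energy_gain=5.0):
    """Return a (pitch_deg, yaw_deg) head offset per (f0_hz, energy) frame.

    alpha       -- smoothing factor of an exponential moving average
    pitch_gain  -- degrees of head pitch per Hz of F0 deviation (assumed)
    energy_gain -- degrees of head yaw per unit of frame energy (assumed)
    """
    baseline = 120.0          # assumed speaker F0 baseline in Hz
    smoothed_f0 = None
    offsets = []
    for f0, energy in frames:
        if f0 <= 0:           # unvoiced frame: hold the last smoothed pitch
            f0 = smoothed_f0 if smoothed_f0 is not None else baseline
        smoothed_f0 = f0 if smoothed_f0 is None else (1 - alpha) * smoothed_f0 + alpha * f0
        pitch_deg = pitch_gain * (smoothed_f0 - baseline)  # nod with pitch rises
        yaw_deg = energy_gain * energy                     # sway with loudness
        offsets.append((pitch_deg, yaw_deg))
    return offsets
```

Because each frame is processed as it arrives, such a mapping can run online alongside TTS synthesis, which is the constraint the abstract emphasizes for non-scripted interaction.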

https://hal.telecom-paris.fr/hal-02412478
Contributor: Telecomparis Hal
Submitted on : Sunday, December 15, 2019 - 3:05:02 PM
Last modification on : Wednesday, June 24, 2020 - 4:19:55 PM

Identifiers

  • HAL Id : hal-02412478, version 1

Citation

Herwin van Welbergen, Yu Ding, Kai Sattler, Catherine Pelachaud, Stefan Kopp. Real-time Visual Prosody for Interactive Virtual Agents. International Conference on Intelligent Virtual Agents (IVA), Aug 2015, Delft, Netherlands. pp.139-151. ⟨hal-02412478⟩
