Conference paper, Year: 2015

Real-time Visual Prosody for Interactive Virtual Agents

Abstract

Speakers accompany their speech with incessant, subtle head movements. Implementing such "visual prosody" in virtual agents is important not only to make their behavior more natural, but also because it has been shown to help listeners understand speech. We contribute a visual prosody model for interactive virtual agents that must hold live, non-scripted interactions with humans and therefore have to rely on Text-To-Speech (TTS) rather than recorded speech. We present our method for creating visual prosody online from continuous TTS output, and we report results from three crowdsourcing experiments that investigate whether, and to what extent, it enhances the interaction experience with an agent.
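This record does not describe the model's internals, but the general idea the abstract names, turning prosodic features of streaming TTS output into continuous head motion in real time, can be illustrated with a small sketch. The Python example below is not the authors' model; all names (ProsodyFrame, head_offsets_from_prosody) and parameter values are hypothetical, and the mapping from pitch and energy to head rotations is only an assumed, simplified stand-in.

from dataclasses import dataclass

@dataclass
class ProsodyFrame:
    """One analysis frame from a (hypothetical) streaming TTS engine."""
    pitch_hz: float   # fundamental frequency, 0.0 for unvoiced frames
    energy: float     # frame energy, normalized to [0, 1]

def head_offsets_from_prosody(frames, pitch_ref=120.0, pitch_gain=0.05,
                              energy_gain=4.0, smooth=0.85):
    """Yield (pitch_deg, yaw_deg) head-rotation offsets, one per frame.

    Pitch excursions above or below a reference value nod the head up or
    down; energy modulates a small side-to-side sway. An exponential
    moving average keeps the motion smooth for real-time playback.
    """
    head_pitch, head_yaw = 0.0, 0.0
    for f in frames:
        target_pitch = pitch_gain * (f.pitch_hz - pitch_ref) if f.pitch_hz > 0 else 0.0
        target_yaw = energy_gain * (f.energy - 0.5)
        head_pitch = smooth * head_pitch + (1.0 - smooth) * target_pitch
        head_yaw = smooth * head_yaw + (1.0 - smooth) * target_yaw
        yield head_pitch, head_yaw

if __name__ == "__main__":
    # Fake prosody stream with rising pitch and energy, standing in for
    # frames that a real TTS engine would deliver during synthesis.
    stream = [ProsodyFrame(pitch_hz=100.0 + 5.0 * i, energy=0.4 + 0.02 * i) for i in range(10)]
    for rot in head_offsets_from_prosody(stream):
        print("head pitch %.2f deg, yaw %.2f deg" % rot)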
No file deposited

Dates and versions

hal-02412478, version 1 (15-12-2019)

Identifiers

  • HAL Id: hal-02412478, version 1

Cite

Herwin van Welbergen, Yu Ding, Kai Sattler, Catherine Pelachaud, Stefan Kopp. Real-time Visual Prosody for Interactive Virtual Agents. International Conference on Intelligent Virtual Agents, Aug 2015, Delft, Netherlands. pp. 139-151. ⟨hal-02412478⟩